Communication is an important part of our daily life, with most of our communication made up of verbal and nonverbal signals. It’s how we let others know what we are feeling, what we need and how we share our thoughts. In the dairy industry, we also communicate with the cows. However, this communication is not as straightforward. We mostly rely on nonverbal signals of what may or may not be working, reviewing behaviors and records to try to understand what message the cows are conveying.

Katie Raver
Animal Nutrition and Field Support / Rock River Laboratory

Growing technology

Data, when used correctly, can be like a translator between us and the cow. Clean data in particular allows us to translate correctly. I know many times I have relied on Google Translate to help me communicate in a different language; however, it has often led me astray due to the dialect and context of the message. Clean data is akin to an interpreter who has information about the situation and can accurately translate the message. With so many sources of data on a dairy, it can be difficult to keep track of it all and implement a usable program around it. Modern technology has also increased the amount of data coming from each source. However, data from all these sources can be put together to help create a clearer story, interpreting what cows are trying to tell us and, in doing so, maximizing profitability.

Tracking data over time allows management teams to create attainable goals and track progress toward them. A clean dataset makes all of this possible. As growing technology is integrated on a dairy, the phrase “clean data” comes up increasingly often. Clean data refers to a dataset free of incorrect, corrupted, duplicated or incomplete records. Although this may seem simple, creating a clean dataset often takes proper planning and forethought about how variables should be named and classified.
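To make the idea concrete, here is a minimal sketch of what basic cleaning checks might look like on feed-sample records. The field names (`sample_id`, `feed_name`, `ndf_pct`) and values are illustrative assumptions, not a real farm dataset:

```python
# Hypothetical feed-sample records; two of the common "dirty data" problems
# are represented: a duplicated entry and an incomplete one.
samples = [
    {"sample_id": "CS-001", "feed_name": "corn silage 2023", "ndf_pct": 36.5},
    {"sample_id": "CS-001", "feed_name": "corn silage 2023", "ndf_pct": 36.5},  # duplicate
    {"sample_id": "CS-002", "feed_name": "corn silage 2023", "ndf_pct": None},  # incomplete
]

def clean(records):
    """Drop duplicated sample IDs and records with missing values."""
    seen, cleaned = set(), []
    for r in records:
        if r["sample_id"] in seen:
            continue  # skip duplicated entries
        if any(v is None for v in r.values()):
            continue  # skip incomplete entries
        seen.add(r["sample_id"])
        cleaned.append(r)
    return cleaned

print(len(clean(samples)))  # one usable record survives the checks
```

Real cleaning pipelines also validate ranges (an NDF of 365% is clearly a typo), but the principle is the same: decide the rules up front so every record entering the dataset follows them.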

Production, reproduction and health records have been tracked for years – and in more recent years, behavioral tracking data was added with technologies such as pedometers, rumination collars and GPS trackers. Typically, tracking and storing this data over time helps uncover the associations between different parameters and production or likelihood of disease. For example, a large drop in rumination may signal an increased risk of disease and could indicate a cow that needs attention. These technologies become increasingly valuable as dairy sizes increase, growing the power of the data. As a rule of thumb, as sample size increases, the variability of what we estimate from that sample decreases.

Changes over time

Often, nutritional data is reviewed as a discrete value at a certain point in time rather than as a continuous dataset that evolves over time. This can make it difficult to translate how the cows are responding to feed changes. When we sample a forage, we enter that exact value or a weighted sample average into our ration program and implement changes based on that analysis, with the resulting ration fed until the next sample is taken. This may be adequate for balancing the ration; however, when problems arise, it may be harder to isolate the issue.


Consider a real-life scenario: you are invited to a dairy to evaluate a recent drop in milk production, and the team supplies you with basic information about the dairy.

  • Situation 1: You’re supplied with the most recent feed analysis, and on the sample, you see that neutral detergent fiber (NDF) is 36.5%, the starch is 34.5%, and 7-hour in-situ starch digestibility is 76%. This sample looks about average; nothing stands out as a culprit for milk loss.
  • Situation 2: You’re supplied with a graph of all corn silage samples, which have been tested weekly over the last year. The graph indicates they have just opened a new pile of corn silage. Starch was previously at 36.5%, and NDF was at 34%. The previous silage’s 7-hour in-situ starch digestibility was 80%.

Without history, this silage may look quite good. However, when able to track this trend and tie it back to a change in feed, we have a more accurate translation of what the cows are trying to say – which in this case is that they are lacking the rumen-degradable starch they were getting from the previous year’s corn silage. Our clean dataset was able to help translate this problem into a clear answer. Although this situation may seem oversimplified, there are many times when I have been able to help colleagues in the industry by simply pulling together datasets, tracking over time and identifying outliers.
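The kind of check described above can be sketched in a few lines: compare the newest sample against the historical run and flag it when it falls well outside the established range. The digestibility values below are illustrative, loosely patterned on the scenario:

```python
import statistics

# Weekly 7-hour starch digestibility (%) from the previous pile (illustrative).
history = [80.2, 79.5, 80.8, 81.0, 79.9, 80.4]
new_sample = 76.0  # first sample from the newly opened pile

mean = statistics.mean(history)
stdev = statistics.stdev(history)

# Flag readings more than 3 standard deviations from the historical mean.
if abs(new_sample - mean) > 3 * stdev:
    print(f"Flag: {new_sample}% is well outside the historical trend ({mean:.1f}%).")
```

Viewed alone, 76% digestibility looks unremarkable; viewed against a year of weekly samples hovering around 80%, it stands out immediately – which is exactly the point of tracking over time.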

We readily recognize that forages contribute meaningful variation to rations due to crop year, cutting or field-to-field changes; however, commodity feeds can contribute economically impactful variation as well. Sampling both forages and commodities or purchased feeds frequently enough to uncover meaningful variation not only helps in troubleshooting problems but also provides insight into a multitude of economic opportunities – such as how forage programs are evolving, whether they are improving as expected or which feeds provide the most consistent or economical nutrient supply. Tracking forage management practices can help us understand what factors can be controlled to increase the quality and consistency of the crop.

One of the most important steps in being able to track data over time is creating a clean dataset. Modern technologies can make this a much easier task than it once was when we relied on records kept by hand. In the aforementioned situation, if the farm did not label the corn silage with the year or pile, it would be harder for us to uncover the reason for a production change. Samples are also harder to compile and graph when they are all named differently.
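As a small illustration of the naming problem, inconsistently labeled samples will not group together when compiled. A simple normalization step – sketched below with made-up label variants – lets differently written entries collapse to one key:

```python
import re

# Illustrative label variants for the same feed; real farm records
# often mix capitalization, punctuation and year formats like this.
labels = [
    "Corn Silage 2023 Pile 1",
    "corn silage 23 pile1",
    "CORN SILAGE-2023, PILE 1",
]

def normalize(label):
    """Reduce a feed label to a consistent grouping key."""
    s = label.lower()
    s = re.sub(r"[^a-z0-9]+", " ", s)       # unify punctuation and spacing
    s = re.sub(r"\b20(\d{2})\b", r"\1", s)  # "2023" -> "23"
    s = re.sub(r"pile\s*", "pile ", s)      # "pile1" -> "pile 1"
    return s.strip()

print({normalize(l) for l in labels})  # all three collapse to one key
```

The better fix, of course, is agreeing on a naming convention up front so normalization like this is never needed.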