At Clarifai, we have a team called the Data Strategy Team: a group of conscientious, diverse people who train our visual recognition models and ensure we're building accurate, unbiased AI. Read on to learn what they do and how you can apply their best practices to your own model building!
Back in the early days of computing, companies relied on teams of people performing complex calculations by hand, operations a computer now finishes in a fraction of a second. Today, it seems like every company wants to build or incorporate artificial intelligence that can complete tasks faster and more accurately than humans can. However, there are things about the human mind that we are not able to fully replicate with a computer alone (yet!).
We have a fantastic team of minds at Clarifai we call the Data Strategy Team that helps us curate and assess the quality of data for creating robust AI models. Working alongside our research, engineering, and client success teams, they distill the feedback they receive on custom models from every side and work to constantly improve the API in a way that best reflects the big, beautiful world. The team's diverse backgrounds allow them to see things that others may not. When building an AI model, the team has to ask: "What are we supposed to see? Is what we are asking to find visible and distinguishable? If we aren't able to answer these questions ourselves, does it make sense to ask a computer to do it?" Here are some of our Data Strategy Team's tips to consider when building out a model!
Break down the visual components
AI models learn from their inputs, so we need to make sure those inputs contain the right elements for the model to understand. Say we want a model that recognizes a leaf on a plant. By giving it images of many different species, we teach it the different shapes, colors, and textures leaves can take on; these are visually tangible aspects it can learn to recognize. Now what if we wanted to train our model to identify an emotion like anger? Anger is expressed differently by different cultures and people.
When trying to teach a more abstract concept, you need to make sure your inputs represent those variations as well. Determine the things that represent your concept and make sure examples of each get incorporated into the training set. This leads to higher accuracy for what you want your model to focus on, and you can refine that accuracy further once you evaluate your inputs.
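As a rough illustration, here is a minimal Python sketch of the kind of coverage check this implies. Everything in it is hypothetical: the file names, the `variation` tags, and the 50% threshold are placeholders for whatever variations matter to your concept, not anything Clarifai prescribes.

```python
from collections import Counter

# Each training example is tagged with the concept it teaches plus the
# visual variation it demonstrates (species, lighting, culture, etc.).
training_set = [
    {"image": "maple_01.jpg",  "concept": "leaf", "variation": "maple"},
    {"image": "maple_02.jpg",  "concept": "leaf", "variation": "maple"},
    {"image": "maple_03.jpg",  "concept": "leaf", "variation": "maple"},
    {"image": "fern_07.jpg",   "concept": "leaf", "variation": "fern"},
    {"image": "cactus_03.jpg", "concept": "leaf", "variation": "cactus"},
]

# Tally how many examples cover each variation of the concept.
coverage = Counter(ex["variation"] for ex in training_set)

# Flag variations that are thin relative to the best-covered one.
threshold = 0.5 * max(coverage.values())
for variation, count in coverage.items():
    if count < threshold:
        print(f"'{variation}' may need more examples (only {count} found)")
```

Run against the toy data above, this flags `fern` and `cactus`: the model has seen plenty of maple leaves but little else.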
Incorporate relevant training data
One of the biggest misconceptions about AI models is that they recognize everything correctly every time. A model is only as good as the data used to train it, and it can fail to make accurate predictions when its training data doesn't look like what it will be tested on. Imagine you want to build a model that detects different items for recycling, but all of your training data is stock photography: objects sitting on tables, or held by people who are eating and drinking. Is that how the model will actually be used? Would it detect items in photos of people's trash bins out in the world? Probably not.
Not only should the data you incorporate be relevant, but your training data should also share the visual characteristics of the intended test data. Will your test data be inverted or blurry? Will it be grayscale rather than color? These factors affect a model's precision and accuracy too. A model is simply a block of clay, and it is your job to shape it as effectively as possible.
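If you expect those conditions at test time, you can bake them into your training set up front. Here is a minimal sketch using the Pillow imaging library; the file names are placeholders, and the specific transforms simply mirror the questions above rather than any fixed recipe.

```python
from PIL import Image, ImageFilter, ImageOps

# If the model will see grayscale, blurry, or inverted images at test
# time, the training set should include those conditions as well.
image = Image.open("recycling_bin.jpg").convert("RGB")

variants = {
    "grayscale": ImageOps.grayscale(image),
    "blurred":   image.filter(ImageFilter.GaussianBlur(radius=2)),
    "inverted":  ImageOps.invert(image),
}

# Save each variant alongside the original so every expected test-time
# condition is represented during training.
for name, variant in variants.items():
    variant.save(f"recycling_bin_{name}.jpg")
```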
Remove biases at all costs
Just like human beings, an artificial intelligence model is susceptible to what it is taught. To the model, its inputs are a source of truth that describe its world; it can only understand the world through its teachings. We have seen this play out in extreme cases, such as Google's photo-tagging mispredictions and Microsoft's chatbot Tay. When shaping models, we want to make sure we aren't introducing any of our human biases.
When you are training concepts that describe a profession, you should represent all of the demographics involved rather than merely the most prominent. Even well-established datasets can be biased toward the culture they were collected in. Look at FaceScrub, a popular dataset of celebrity faces: it skews heavily toward white celebrities. We could increase its effectiveness by incorporating more celebrities from other parts of the world. If we don't acknowledge our biases when we gather a dataset, we only build for what we know rather than looking beyond it.
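One concrete way to acknowledge that bias is to audit the distribution before you train. The sketch below tallies a demographic attribute across a labeled dataset; the regions and counts are invented purely to illustrate the audit and are not FaceScrub's actual numbers.

```python
from collections import Counter

# Hypothetical per-image metadata; "region" stands in for whatever
# demographic attribute you are auditing (these numbers are made up).
regions = (["north_america"] * 700 + ["europe"] * 200 +
           ["asia"] * 60 + ["africa"] * 25 + ["south_america"] * 15)

counts = Counter(regions)
total = sum(counts.values())

# Print each group's share of the dataset so skew is visible at a glance.
for region, count in counts.most_common():
    print(f"{region:<15} {count:4d}  ({100 * count / total:.1f}%)")
```

A report like this makes the skew impossible to miss, and tells you exactly which groups need more examples before the model is trained.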
Where to go from here?
Machine learning models are often trained on data blindly scraped from the Internet. After all, it's easy to run a few search terms, pull down thousands of images, and upload them as training data.
However, that data rarely reflects how diverse our world is. With these tips, you are equipped to recognize these nuances and build models that give meaningful results. At Clarifai, we are aware of these possible pitfalls, and our Data Strategy Team carefully improves our neural net models with them in mind. The team collaborates with our enterprise customers to address their needs and iterate on models that enhance a platform's experience. If you have any questions or want to learn more about building effective models, reach out to us at hackers@clarifai.com!