
# Visual AI: Labeling YouTube Data

## Problem:

For many YouTubers, understanding their data is hard, and external sources are often needed to make sense of their metrics and performance. Many YouTubers still measure performance by hand.

The problem came up in a conversation with Doc Williams, who showed me how he was manually analyzing the key YouTube metrics for each of his videos. As he explained the issue, I realized it would be a good use case for deep learning.

Below is a sample of how the data looked when labeled manually. Numbers in red are underperforming, numbers in yellow are average, and numbers in green are performing well.

Each field has thresholds: when a value falls within a certain range, it is classified as a certain color. Below is the formula that was being used to label the data manually. This formula was used to label and clean the data so it could be used to train the deep learning model.

**Point Range**

Green = 1 point

Yellow = 0.5 points

Red = 0.1 points

## Important Metrics:

All of the metrics below are measured over a 90-day range.

### Impressions CTR (Click-Through Rate)

7% and above: Green

4.0-6.99%: Yellow

3.99% and below: Red

### Average View Percentage (AVP)

50% and above: Green

30-49%: Yellow

29% and below: Red

### Average View Duration (AVD)

3:50 and above: Green

2:50-3:49: Yellow

2:49 and below: Red

### Watch Time

Above 100 hours: Green

45-99 hours: Yellow

44 hours and below: Red
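
To make these thresholds concrete, here is a minimal Python sketch of the same point-scoring rule. The function names are my own, and I assume average view duration is given in seconds (3:50 = 230 s, 2:50 = 170 s); treat this as an illustration of the formula above, not the exact Airtable setup.

```python
# Point values from the scoring formula above.
POINTS = {"green": 1.0, "yellow": 0.5, "red": 0.1}

def score_ctr(ctr_pct):
    """Impressions click-through rate, in percent."""
    if ctr_pct >= 7.0:
        return POINTS["green"]
    if ctr_pct >= 4.0:
        return POINTS["yellow"]
    return POINTS["red"]

def score_avp(avp_pct):
    """Average view percentage."""
    if avp_pct >= 50.0:
        return POINTS["green"]
    if avp_pct >= 30.0:
        return POINTS["yellow"]
    return POINTS["red"]

def score_avd(avd_seconds):
    """Average view duration, assumed to be in seconds (3:50 = 230 s, 2:50 = 170 s)."""
    if avd_seconds >= 230:
        return POINTS["green"]
    if avd_seconds >= 170:
        return POINTS["yellow"]
    return POINTS["red"]

def score_watch_time(hours):
    """Total watch time over the 90-day window, in hours."""
    if hours > 100:
        return POINTS["green"]
    if hours >= 45:
        return POINTS["yellow"]
    return POINTS["red"]

def total_score(ctr_pct, avp_pct, avd_seconds, watch_hours):
    """Sum the points from all four metrics."""
    return (score_ctr(ctr_pct) + score_avp(avp_pct)
            + score_avd(avd_seconds) + score_watch_time(watch_hours))
```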

## Problem-Solving Methods:

### Airtable and Data Cleaning

Before I began labeling any data, I cleaned the dataset by removing videos with null values, as they would affect the performance of the model.

The first step in solving this problem is to understand what we want to predict: which data points we want to use as inputs, and which field we want the model to predict. For this problem specifically, the data first needs to be cleaned and labeled; that is what provides the proper training data for our deep learning model.

There are many ways to clean and label large datasets, but I used what I was most familiar with: Airtable. It let me easily query and filter data based on values and numeric ranges. (If there is a more efficient way to do this than my method, please let me know!)

Let me quickly reference the formula from the previous section:

**Point Range**

Green = 1 point

Yellow = 0.5 points

Red = 0.1 points

As I filtered the data through each view, I added points based on these parameters. For example, in the view where AVP > 50 I added +1 to the formula field, and where CTR < 3.99 I added only 0.1. I repeated this for the other fields, adding points based on each field's performance.

image

Once all the points were added, I developed a rule for classifying each video based on its total formula score.

Formula < 2.0 = Unhealthy
Formula > 2.0 and < 2.5 = Average
Formula > 2.5 = Great
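
Expressed in Python, that classification rule looks like the sketch below. Note that the rule as written leaves scores of exactly 2.0 and 2.5 unassigned; folding them into the higher bucket is my own assumption.

```python
def classify(score):
    """Map a total formula score to a health label.

    Boundary scores of exactly 2.0 and 2.5 are not covered by the rule above;
    here they are assumed to fall into the higher bucket.
    """
    if score < 2.0:
        return "Unhealthy"
    if score < 2.5:
        return "Average"
    return "Great"
```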

image

Once this was all complete, I exported the dataset to CSV and prepared to use it for training on the Peltarion platform.
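
I did the cleaning and labeling in Airtable, but for reference, the equivalent steps in pandas would look roughly like this. The file name and column names are placeholders for whatever your YouTube Studio export uses, and `total_score` and `classify` are the helpers sketched earlier.

```python
import pandas as pd

# Placeholder file and column names; substitute the ones from your own export.
df = pd.read_csv("youtube_90_day_metrics.csv")

# Drop videos with null values, since they would hurt model training.
metric_cols = ["Impressions CTR (%)", "Average % Viewed",
               "Average View Duration", "Watch Time (hours)"]
df = df.dropna(subset=metric_cols)

# Score and label each video using the helpers sketched earlier.
df["Formula"] = df.apply(
    lambda row: total_score(row["Impressions CTR (%)"],
                            row["Average % Viewed"],
                            row["Average View Duration"],
                            row["Watch Time (hours)"]),
    axis=1,
)
df["Select"] = df["Formula"].apply(classify)

# Export the labeled dataset for upload to Peltarion.
df.to_csv("labeled_youtube_videos.csv", index=False)
```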

## Training the Model

I ran various experiments and used various techniques in the Peltarion platform to get the best results. Since this is a smaller dataset, there are things that need to be done to ensure the model can make reasonably correct predictions. I won't cover the earlier experiments for the sake of time, but I will show the metrics and techniques used for the best one. Although I think the model can still be improved, I feel it performs well for such a small dataset.

Before you look at the metrics and techniques used, look up the terms in the Peltarion Glossary to gain a better understanding of what the numbers mean.

When uploading a dataset into Peltarion, there are a few key concepts to understand:

Each definition is pulled from the Peltarion Glossary

**Features**

A feature is an input variable. In the datasets table view it is represented as a column.

Multiple features are usually grouped in feature sets.

Example: A house can have the following features: number of rooms (numeric), year built (numeric), neighborhood (categorical), street name (categorical), etc.

**Subsets**

A subset is a smaller set of your dataset. For the purpose of training a model, you usually subdivide your dataset into three subsets: training set, validation set and test set.

**Inputs**

Input is the series of examples fed to a layer.

**Targets**


Target represents the desired output that we want our model to learn. In the case of a classification problem, the targets would be the labels of each of the examples in the training set.

In training the model I used a total of 5 features: 4 as input variables and 1 as the target.

I used Average % Viewed, Impressions CTR (%), Average View Duration, and Watch Time as features, which are the inputs used to determine the target. For the target field, I used the "Select" field, which is where the label classification data is stored.

Before running an experiment, I tweaked the subset percentages slightly to provide more data for training and validation. The generic setting is usually 80% training data and 20% test data, but instead I used a 60% training, 20% test, and 20% validation split.
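
Peltarion handles the subsets in its UI, but for reference, an equivalent 60/20/20 split could be done with scikit-learn along these lines. The column names match my dataset, `df` is the labeled dataframe from earlier, and the random seed is arbitrary.

```python
from sklearn.model_selection import train_test_split

FEATURES = ["Average % Viewed", "Impressions CTR (%)",
            "Average View Duration", "Watch Time (hours)"]
TARGET = "Select"

X, y = df[FEATURES], df[TARGET]

# Carve out 60% for training, then split the remaining 40% evenly
# into test and validation (20% / 20% of the full dataset).
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, train_size=0.6, stratify=y, random_state=42)
X_test, X_val, y_test, y_val = train_test_split(
    X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=42)
```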

Now the model is ready to train. Luckily, the platform provides neural network snippets you can use to train on your data. This is a classification problem, so I used the tabular snippet.
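
Peltarion's tabular snippet is configured in its UI rather than in code, but a dense network along the same lines would look roughly like this in Keras; the layer sizes here are my own guesses, not the snippet's actual architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

# A small fully connected network for tabular classification:
# 4 numeric inputs -> two hidden layers -> softmax over the three labels.
model = keras.Sequential([
    layers.Input(shape=(4,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),  # Unhealthy / Average / Great
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```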

image

As you can see, the model is able to classify the classes accurately, but its precision is not as high, so there is room for improvement, perhaps through more data or other techniques. (To gain an understanding of the metrics, please refer to the glossary.)

image

image

## Deployment

When you are ready to deploy a model in Peltarion, all you have to do is go to New deployment in the Deployment tab and select the experiment you want to deploy; Peltarion then provides you with your own API for it.

To use the model, I connected the API to Bubble.io via the API Connector plugin.

The goal of this deep learning model is to label each item in a spreadsheet with a specific value based on certain YouTube metrics. To do this, you need to loop each item through your Peltarion API. In Bubble there are various ways to do this, but I decided to use BDK Native's Utilities plugin, which includes a list processor tool.
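
Outside of Bubble, the same loop can be sketched in Python with `requests`. The URL, token, and JSON payload shape below are placeholders; check your own Peltarion deployment page and the deployment API docs for the exact format.

```python
import requests

# Placeholders: copy the real URL and token from your Peltarion deployment page.
DEPLOYMENT_URL = "https://<your-peltarion-deployment-url>"
TOKEN = "<your-deployment-token>"

def predict_video(video):
    """Send one video's metrics to the deployed model and return the raw response."""
    response = requests.post(
        DEPLOYMENT_URL,
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"rows": [video]},  # assumed payload shape; verify against the API docs
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Loop every uploaded video through the API, the way the Bubble list processor does.
videos = [
    {"Average % Viewed": 55, "Impressions CTR (%)": 6.2,
     "Average View Duration": 210, "Watch Time (hours)": 120},
]
predictions = [predict_video(v) for v in videos]
```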

The Bubble logic for a user analyzing their own data works like this:

User uploads data via CSV → A new thing called "Dataset" is created →

Each item in that CSV is converted to a thing called "Video" → These Videos are added to the Dataset →

User is sent to a "Dataset page" → User clicks Analyze Dataset → Each item in the dataset is sent to the API via the list processor →

A prediction is made on each Video based on its metrics → Each Video is then labeled based on the output of the prediction.

There are of course other nuances, but this is generally how the app works.

## Solution:

Below is an example video of the deep learning model at work in Bubble.io.

Problem Solved!

Don't hesitate to reach out to me on Twitter with any questions or suggestions.