Introduction
My primary aim was to predict the sales of an item given its Best Seller Rank on Amazon. Predicting sales also helps with other use cases, like suggesting to sellers the best products to sell. My ultimate aim is to provide data insights about any product: how much it will sell, as well as when, where and how.
What is Amazon Best Seller Rank?
Best Seller Rank is a ranking system provided by Amazon that is linked to the number of sales of a product. The rank is recalculated frequently. An important point to note is that Best Seller Rank is a ranking system: the rank number by itself does not tell you an absolute number of sales.
A rank of #1, therefore, means that product has sold more than any other product in that category, on that marketplace.
This makes it relatively easy to estimate the number of sales of a product if we know the sales of other products ranking close to it.
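To make that concrete, here is a minimal sketch with made-up numbers and a hypothetical helper: estimating sales for a rank by interpolating between two nearby ranks whose sales are already known. It ignores the non-linear shape of the curve discussed later, which is only a reasonable approximation when the two ranks are close together.

// Hypothetical illustration: linear interpolation between two nearby ranks
// whose daily sales we already know. All numbers below are made up.
def interpolateSales(rank: Int,
                     lowRank: Int, lowSales: Double,
                     highRank: Int, highSales: Double): Double = {
  val t = (rank - lowRank).toDouble / (highRank - lowRank)
  lowSales + t * (highSales - lowSales)
}

// If rank #100 sells ~50 units/day and rank #200 sells ~30 units/day,
// a product sitting at rank #150 would be estimated at roughly 40 units/day.
val estimate = interpolateSales(150, 100, 50.0, 200, 30.0)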
How did we get the initial sales data? I have been selling professionally on Amazon and have been tracking my own sales vs ranks for all my products in various categories. Additionally I interviewed other professional sellers to get an approximate idea of their sales.
With all the data obtained, cleaned and set up, I entered the next phase of design: choosing the best framework for predictive analysis.
Enter Spark
At the 2016 Spark Summit, Nick Heudecker asked the question "Is Apache Spark the future of data analysis?"
While no clear answer was provided, we certainly know that Spark alone is not the future of data analysis. We depended on a number of third-party tools, like Stanford NLP for natural language parsing and TensorFlow for image analysis, but Spark formed the backbone of almost all of it.
[Image: https://www.aihello.com/resources/wp-content/uploads/2019/04/graph.png]
While there might be some truth to the above chart, I tend to believe that Spark has not reached peak hype yet. Or maybe it only seems that way from down under in Australia, and Spark has already passed its peak in Silicon Valley.
Spark has played amazingly well with our Spring Boot application and our standalone machine learning application (a command-line interface, CLI).
Our database is PostgreSQL. The Spark CLI programs run weekly: they read the PostgreSQL database via the JDBC bridge, process the data, build learning models and save the trained model to a local path.
This trained model is then read by Spark inside Spring Boot to quickly make predictions and process any incoming information from web users in real time. With all the infrastructure set up, we had estimated a week to complete the linear regression algorithms, or a worst-case scenario of two weeks if the problem turned out to be the more complex log-linear regression.
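As a rough sketch of the read side of that weekly CLI step (the connection details, table name and save path below are hypothetical, not taken from our actual setup), it looks something like this:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("bsr-training-cli")
  .master("local[*]")
  .getOrCreate()

// Pull the rank-vs-sales observations out of PostgreSQL over the JDBC bridge
val salesDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://localhost:5432/sales")   // hypothetical URL
  .option("dbtable", "sales_observations")                    // hypothetical table
  .option("user", "spark_reader")                             // hypothetical credentials
  .option("password", "********")
  .load()

// After training (see below), the fitted model is written to a local path,
// e.g. model.write.overwrite().save("/data/bsr-model"), so the Spring Boot
// application can load it again for real-time predictions.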
Houston, We Have a Problem
I expected a straightforward linear regression model of the form y = mx + b. This would have made the problem very simple, as Apache Spark provides GeneralizedLinearRegression:
import org.apache.spark.ml.regression.GeneralizedLinearRegression

// Gaussian family with the identity link is ordinary linear regression
val glr = new GeneralizedLinearRegression()
  .setFamily("gaussian")
  .setLink("identity")
  .setMaxIter(100)
  .setRegParam(0.4)
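Fitting it would then be a one-liner. Here is a hedged usage sketch, reusing the hypothetical salesDF from the JDBC sketch above; the column names and the assembler step are illustrative rather than our exact pipeline:

import org.apache.spark.ml.feature.VectorAssembler

// Pack the predictor column(s) into the "features" vector column Spark ML expects
val assembler = new VectorAssembler()
  .setInputCols(Array("bsr"))
  .setOutputCol("features")

val training = assembler.transform(salesDF).withColumnRenamed("sales", "label")

val model = glr.fit(training)
println(s"Coefficients: ${model.coefficients}  Intercept: ${model.intercept}")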
I first plotted the data in an Excel sheet, as I had already exported it from SQL to CSV. The relationship between Amazon Best Seller Rank and the number of sales turned out to look like this chart.
[Image: https://www.aihello.com/resources/wp-content/uploads/2019/04/graph2.png]
So it looked like a log-linear model, and I assumed the Poisson family of GeneralizedLinearRegression would be a good fit. We changed the GLM family to Poisson and ran the tests a few more times; however, the mean squared error (MSE) and the RMSE were far too high.
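The change itself is small. A minimal sketch, reusing the hypothetical training DataFrame from the sketch above (the split and column names are illustrative):

import org.apache.spark.ml.evaluation.RegressionEvaluator

val poissonGlr = new GeneralizedLinearRegression()
  .setFamily("poisson")
  .setLink("log")        // log link, matching the log-linear shape of the chart
  .setMaxIter(100)
  .setRegParam(0.4)

val Array(trainSet, testSet) = training.randomSplit(Array(0.8, 0.2), seed = 42)
val poissonModel = poissonGlr.fit(trainSet)

// Evaluate on held-out data; this is the RMSE that turned out to be far too high
val rmse = new RegressionEvaluator()
  .setMetricName("rmse")
  .evaluate(poissonModel.transform(testSet))
println(s"RMSE: $rmse")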
I suspect Spark has issues in dealing with sparse data.
We have limited data on our own sales, and we don't sell products in all categories: Amazon has gated some categories, which leaves us unable to sell in them or make any observations on sales for those gated categories.
With a limited amount of input data, Spark's MSE was too high, and even for our own products, for which we knew the sales, the predicted sales were way off the mark.
Over the next few weeks I spent my time trying out every regression family Spark offers, and none of them gave the desired results. I had absolutely no idea how to proceed, and this reminded me of a quote by Dan Ariely:
“Big data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it…”
My last resort was to use deep neural network learning. I had completed the Andrew Ng course on Machine Learning when he first launched it, and I cannot recommend it enough. Now my aim was to use a neural network to solve this problem.
We Must Go Deeper
I eventually made the decision to keep Spark for big data analysis; however, we would rely on other libraries for deep learning.
I narrowed down my choices to two libraries:
1) DeepLearning4J
2) Sparkling Water (H2O)
I was particularly impressed with this article that discusses how H2O deep learning was used to predict crimes and arrests in San Francisco and Chicago. Since our future use cases are similar, where we will be predicting fraudulent users and fraudulent competitors, I decided to plunge into deep learning with H2O rather than keep trying to work around the Spark ML issues.
Sparkling Water (H2O) runs within the Spark framework, so I could use their integrated framework without replacing Spark. This was a definite bonus for me. The documentation was also nicely done, and I was thoroughly impressed with H2O's web UI, Flow.
I could test my data and algorithms in the browser without writing any code.
The web UI provided excellent insights into the data, and, as I had hoped, the deep learning neural networks produced exceptional results.
Using the optimized parameters from H2O Flow, I quickly coded the deep learning network in my CLI program:
val train = result('categoryIndex, 'bsr, 'sales)

// Configure Deep Learning algorithm
val dlParams = new DeepLearningParameters()
dlParams._train = train
dlParams._response_column = 'sales
dlParams._fast_mode = false
dlParams._epochs = 30
dlParams._nfolds = 3
dlParams._distribution = DistributionFamily.gaussian

val dl = new DeepLearning(dlParams)
val dlModel = dl.trainModel.get

// save the model
ModelSerializationSupport.exportH2OModel(dlModel, new File("/data/deeplearning.bin").toURI)
In the web API (Spring Boot) application, I read this model in and used it to make predictions in real time for web users:
def startup(): Unit = {
  dlModel = ModelSerializationSupport.loadH2OModel(new File("/data/deeplearning.bin").toURI)
  println("Initialization Of BSR Deep learning Module complete")
}

def predict(categoryIndex: Int, bsr: Int): Double = {
  if (null == dlModel) {
    startup()
  }
  println("\n====> Making prediction with help of DeepLearning model\n")
  val caseClassDS = Seq(InputBSR(categoryIndex, bsr, 0)).toDS()
  val finalresult = dlModel.score(caseClassDS)('predict)
  val finaldf = asDataFrame(finalresult)(sqlContext)
  val predictedSales = finaldf.first().getDouble(0)
  println(s"For category index ${categoryIndex} and BSR ${bsr} the result is ... ${predictedSales}")
  predictedSales
}
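On the Spring Boot side, the predict method above can then be wired behind an endpoint. This is only a sketch with hypothetical names: the BSRPredictor object holding predict and the /predict-sales route are assumptions, not taken from our actual code.

import org.springframework.web.bind.annotation.{GetMapping, RequestParam, RestController}

@RestController
class SalesPredictionController {

  // e.g. GET /predict-sales?categoryIndex=3&bsr=1500
  @GetMapping(Array("/predict-sales"))
  def predictSales(@RequestParam("categoryIndex") categoryIndex: Int,
                   @RequestParam("bsr") bsr: Int): Double =
    BSRPredictor.predict(categoryIndex, bsr)  // BSRPredictor is a hypothetical object name
}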
The end result is this:
[Image: https://www.aihello.com/resources/wp-content/uploads/2019/04/predicting-graph.png]
As you can see from the screenshot, we can only predict sales for products that have a Best Seller Rank in a top-level category, since we have trained the neural network only with sales data from top-level categories.
As we keep collecting data and our algorithm gains sufficient confidence to predict sales for lower-level categories, the app will start making predictions for more products.
This, in my humble opinion, is the best thing about Big Data and Deep Learning. The machine never stops learning and eventually as more data is fed into it, the algorithms automatically start making better predictions.