Experts provide stock recommendations every day through various channels (TV, Twitter, print media, etc.). We need a solution that gathers all of these recommendations from websites on a daily basis.
E.g.:
[login to view URL]
[login to view URL]
[login to view URL]
[login to view URL]
We need to gather the following information from websites like those above:
1. Stock name / ticker symbol
2. Recommendation date and stock price when it was recommended
3. Name of provider: this could be an individual expert or the firm
4. Target price
5. Buy / Sell - whether the recommendation is to buy or sell
6. Stop loss price
7. Any other useful information.
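The fields listed above could be captured in a single record type. The sketch below is illustrative only; the class and field names are assumptions, not part of the project specification:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Recommendation:
    """One expert stock recommendation scraped from a source (hypothetical schema)."""
    stock_ticker: str                 # e.g. "INFY"
    recommendation_date: date
    price_at_recommendation: Optional[float]
    expert_name: str                  # an individual expert or the firm
    target_price: Optional[float]
    transaction_type: str             # "Buy" or "Sell"
    stop_loss_price: Optional[float]
    source_url: str                   # where it was collected from, for attribution
    extra: dict = field(default_factory=dict)  # any other useful information
```

Keeping `source_url` on every record makes the attribution requirement (point 4 of the project, below in the original brief) easy to satisfy.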
The project is:
1. Check the 4 websites/webpages mentioned above. They contain trade recommendations; the task is to extract stock recommendations from these websites in the format specified above.
2. Build a platform that collects this information on a daily basis. It should keep track of data sources already visited and not process them again, and it should be easy to add more data sources and schedule their collection. For example, if you create a crawl plan/data-conversion plan for each website, adding a new website becomes easy.
3. Run the platform to collect this information and store it in a database.
4. Data collection should be legal and ethical: there should be limits on how often an API is called or a data source is visited. All collected data should record the name of the place/person it was collected from so that we can reference it accordingly.
Java, AWS, Google Cloud, Python for scraping, or any other technologies you are comfortable with can be used.
Code which you submit will be checked by our developers for design quality and functional correctness.
Frequently Asked Questions about the project.
Q: Is it a must to use the 4 webpages mentioned above?
Answer: No, you can choose webpages of your choice that provide stock recommendations from popular analysts on a daily basis. If there is an RSS feed / API that provides this data, you can use that as well.
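Where a source offers RSS, no HTML scraping is needed at all. A hedged sketch using only the standard library (the sample feed below is invented; real feeds will have a richer item structure):

```python
# Hypothetical RSS route: parse a simple RSS 2.0 feed with the stdlib.
import xml.etree.ElementTree as ET

def parse_rss(xml_text: str) -> list[dict]:
    """Return title/link/pubDate for each <item> in a simple RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title", ""),
            "link": item.findtext("link", ""),
            "pubDate": item.findtext("pubDate", ""),
        })
    return items

# Invented sample feed for demonstration only:
sample = """<rss><channel>
  <item><title>Buy INFY, target 1650</title>
    <link>https://example.com/reco/1</link>
    <pubDate>Mon, 01 Jan 2024 09:00:00 +0000</pubDate></item>
</channel></rss>"""
```

The item titles would still need a parsing step to pull out ticker, target price, and Buy/Sell before insertion into the database.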
Q: Will we need to crawl and scrape the information from the webpages?
Answer: Yes, the platform / your code will need to crawl/scrape the required information and, once obtained, store it in a database. The schema of the table is as follows:
1. stock_ticker: e.g. INFY / SBIN
2. recommendation_date
3. expert_name
4. target_price
5. transaction_type: e.g. Buy/Sell
6. stop_loss_price
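The schema above maps directly onto a table. A sketch in SQLite (stdlib `sqlite3`): the column names follow the FAQ schema, while `id` and the `source_url` provenance column are assumptions added for the attribution requirement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path or managed DB in practice
conn.execute("""
    CREATE TABLE recommendations (
        id                  INTEGER PRIMARY KEY,
        stock_ticker        TEXT NOT NULL,   -- e.g. INFY / SBIN
        recommendation_date TEXT NOT NULL,   -- ISO date string
        expert_name         TEXT NOT NULL,
        target_price        REAL,
        transaction_type    TEXT CHECK (transaction_type IN ('Buy', 'Sell')),
        stop_loss_price     REAL,
        source_url          TEXT             -- provenance / attribution
    )
""")
conn.execute(
    "INSERT INTO recommendations "
    "(stock_ticker, recommendation_date, expert_name, target_price, "
    " transaction_type, stop_loss_price, source_url) VALUES (?,?,?,?,?,?,?)",
    ("INFY", "2024-01-01", "Example Analyst", 1650.0, "Buy", 1500.0,
     "https://example.com"),
)
row = conn.execute(
    "SELECT stock_ticker, transaction_type FROM recommendations"
).fetchone()
```

The `CHECK` constraint keeps transaction_type to the two values the schema names; prices are nullable since not every recommendation states a target or stop loss.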
Q: Is it legal to crawl?
Ans: Yes, it is legal to crawl if the traffic you add is not beyond 1% of the website's total traffic.
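The rate limits the brief asks for (point 4 of the project) could be enforced with a small throttle like the one below. This is an illustrative sketch; the class name, the 5-second delay, and the daily cap are all assumptions to be tuned per source:

```python
# Illustrative throttle: cap total daily requests per source and force a
# minimum pause between hits, so crawling stays far below a site's load.
class PoliteFetcher:
    def __init__(self, min_delay_s: float = 5.0, max_daily_hits: int = 100):
        self.min_delay_s = min_delay_s
        self.max_daily_hits = max_daily_hits
        self.hits = 0       # reset once per day in a real scheduler
        self.last = 0.0     # timestamp of the previous allowed request

    def allow(self, now: float) -> bool:
        """Return True iff another request is permitted at time `now`."""
        if self.hits >= self.max_daily_hits:
            return False                      # daily budget exhausted
        if now - self.last < self.min_delay_s:
            return False                      # too soon after the last hit
        self.hits += 1
        self.last = now
        return True
```

Honoring each site's robots.txt and terms of service is still required on top of any self-imposed rate limit.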
Q: Can I use Python/Java/Perl libraries or platforms?
Ans: Yes, you can use anything you want, as long as the license is open and free for us to use.