Web Scraping is the process of extracting data or information from an online source such as a website, database, or application. Web Scraping Specialists help people collect valuable digital data and quickly find the useful information they need from websites, mobile apps, and APIs. These experts typically use web scraping tools and advanced techniques to collect large amounts of targeted data without any manual work for the client.
With web scraping, tasks that otherwise may require a lot of time can be automated and done faster. Our experienced Web Scraping Specialists use their expertise to develop scripts that continuously target structured and unstructured data sources.
Here are some projects that our expert Web Scraping Specialists have made real:
Web Scraping Specialists are skilled professionals who know how to help businesses optimize processes while collecting the rich, structured data they need for their specific purposes. Our experts speed up the process and return accurate results in less time, so the customer can make better decisions more quickly without any manual labour. If you are looking for a talented professional to take on a web scraping project for you, you have come to the right place. Here on Freelancer.com you can find talented professionals who will get the job done with top-quality results! Post your project now and see what our Web Scraping professionals can do for you!
Across 362,356 reviews, clients rate our Web Scraping Specialists 4.9 out of 5 stars.
I need a clean, freshly-sourced list of 5,000–10,000 tech-startup contacts for a one-to-one outreach campaign promoting GrowthAI’s free trial. Every record has to come from information that is already public (company websites, press pages, blog author bios, event directories, Crunchbase-style listings), never scraped LinkedIn data, leaked dumps, or anything that could be considered private.
What the sheet must contain
• Company name
• Website URL
• Contact name (when it’s on the site)
• Public business email only (no personal Gmail/Yahoo unless the firm itself lists it as its main contact)
• Industry tag
• Country
Target profile
• Primary industry: Tech Startups
• Regions: North America, Europe, and ...
Please Read Carefully Before Applying
It does not matter whether you consider yourself a “vibe coder” or a traditional software engineer; we accept both here. What matters is whether you can make this system work reliably at scale. We operate a production scraper that processes 500+ leaderboard sites per hour. All sites we scrape are leaderboards, but no two sites are the same. This is not a basic scraper.
What Makes This Scraper Different
The leaderboards we scrape vary heavily in structure and behavior:
- Dynamic buttons, tabs, and switchers
- JavaScript-rendered content
- Hybrid navigation (UI interaction + background API calls)
- Tables, card layouts, podium layouts, or combinations of all three
- Masked usernames and inconsistent rank formats
- Different ordering of wager / prize data ...
I need a small proof-of-concept scraper written in Python that pulls user information from a set of static website pages and exports it into a clean CSV file. The pages load without JavaScript, so a lightweight stack such as requests + BeautifulSoup (or lxml) should be all that’s required; no browser automation is necessary unless you can justify a clear advantage. I will supply the page URLs and highlight the exact fields to capture (name, profile link, location, and any other visible user meta). Your code should handle pagination where applicable, respect polite crawl rates, and be easy for me to adjust if the HTML structure shifts. Deliverables • Well-commented Python script (.py) • Sample CSV containing the extracted records • README with setup steps and a qu...
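For a posting like this, a minimal proof-of-concept might look as follows: a requests + BeautifulSoup script with a polite delay and CSV export. The URL, selectors, and page count below are placeholders, since the client supplies the real pages and fields.

```python
import csv
import time

import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example.com/users?page={}"  # placeholder; real URLs supplied by the client
FIELDS = ["name", "profile_link", "location"]

def scrape_page(url: str) -> list[dict]:
    resp = requests.get(url, timeout=30, headers={"User-Agent": "poc-scraper/0.1"})
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    rows = []
    # Selectors are assumptions; adjust once the real markup is known.
    for card in soup.select("div.user-card"):
        name_el = card.select_one(".name a")
        loc_el = card.select_one(".location")
        rows.append({
            "name": name_el.get_text(strip=True) if name_el else "",
            "profile_link": name_el["href"] if name_el else "",
            "location": loc_el.get_text(strip=True) if loc_el else "",
        })
    return rows

def main() -> None:
    all_rows = []
    for page in range(1, 11):       # pagination bound is a placeholder
        all_rows.extend(scrape_page(BASE_URL.format(page)))
        time.sleep(2)               # polite crawl rate
    with open("users.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(all_rows)

if __name__ == "__main__":
    main()
```

Keeping the selectors in one obvious place makes the script easy to adjust if the HTML structure shifts, which the post explicitly asks for.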
I need a high-end automation tool for a scheduling portal. Requirements:
- Handle 50+ unique browser profiles.
- Integrated media stream handling for verification steps.
- Automation of form filling and fast navigation.
- Anti-detection measures to avoid bot blocks.
Budget: $1,500. Milestone based only.
I have an existing Flutter mobile app (Firebase backend + RevenueCat + web scraping). Most functionality is already implemented. I need an experienced Flutter developer to update and refine several features. I believe this should not take more than a few days for an experienced developer.
Scope of Work:
1. Sync Local Storage with Firestore (Offline Support)
- Keep using local storage for offline mode
- Sync shift data with Firebase Firestore
- Handle offline → online auto sync
- Prevent duplicates (unique shift ID)
- Secure Firestore rules (user-only access)
- Ensure cross-device sync works properly
2. Fix Email Verification (Spam Issue)
- Configure Firebase Auth to use a custom domain
- Set up SPF, DKIM, DMARC
- Improve email template
- Ensure emails land in inbox (Gmail/Outlook) ...
The contractor is commissioned to download DRM-protected videos from an online portal to which the client has legitimate access and usage rights. The videos must be processed as follows:
- Download approximately 240 videos from the portal, about 18 hours of video material
- The videos have an average length of approximately 5 minutes
- Original video titles must be preserved
- The videos must be organized into folders according to the portal order/structure
- All files must be uploaded and stored on Google Drive
- The final folder structure on Google Drive must be the same as on the portal
I need a reliable specialist who can log into our dealership’s backend every weekday, pull fresh customer information, and feed it straight into our call-tracking platform the same day. The only data I’m after are contact details and service records, nothing else, so the extraction script or manual process can stay laser-focused on those two fields for speed and accuracy. Turnaround is critical. If you can set this up and have the first full export/import cycle running smoothly right away, I’m happy to add a rush bonus on top of the agreed rate. Accuracy must be spot-on and the data has to land in the tracking system without duplicates or formatting hiccups.
Deliverables each weekday:
• Clean export of new customer contact details and service record...
SENIOR AI ENGINEER: ON-PREMISE MULTIMODAL RAG SYSTEM WITH CONTINUOUS LEARNING
1. CONTEXT AND THE REAL CHALLENGE
A project in the wire-drawing and galvanizing sector with more than 40 active production lines. The challenge is not a lack of information, but that critical knowledge is volatile: it lives in the experience of veteran supervisors and operators and is passed on verbally. When a technical solution emerges on the plant floor, it is not documented and is lost to the next shift. We want to develop an AI ecosystem that not only answers questions but also captures and democratizes the technical knowledge that emerges day to day.
2. THE SOLUTION: "THE KNOWLEDGE LOOP" B...
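As an illustration of the retrieval core such a "knowledge loop" needs, here is a minimal sketch: shop-floor notes embedded locally and matched to a question by cosine similarity. The model choice and example notes are assumptions, not part of the brief; a production system would add capture, storage, and generation layers on top.

```python
# Minimal sketch of the retrieval core of a "knowledge loop":
# shop-floor notes are embedded locally and matched against a question.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small model that can run on-premise

notes = [
    "Zinc bath at line 12 foams when flux concentration drifts above spec.",
    "Wire breaks on block 3 usually trace back to worn drawing dies.",
]
note_vecs = model.encode(notes, normalize_embeddings=True)

def retrieve(question: str, k: int = 1) -> list[str]:
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = note_vecs @ q  # cosine similarity (vectors are normalized)
    return [notes[i] for i in np.argsort(scores)[::-1][:k]]

print(retrieve("Why is the galvanizing bath foaming?"))
```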
I have an Excel template ready and a list of items I need populated with reliable, up-to-date product details. For every product on the list, please pull information only from official brand websites, leading eCommerce platforms, and the customer-review sections of those sites.
What I expect captured for each item:
• Current price and stock status
• Key features or technical specifications exactly as stated by the manufacturer or retailer
• Average customer rating plus any standout review insights (e.g., “4.5/5 from 230 reviews”)
Accuracy matters more than speed, so cross-check conflicting figures before entering them. Add the source URL next to every data point so I can verify quickly. Once the sheet is complete, send it back in the same format...
I have a spreadsheet with 200 US-based websites and I need the direct phone numbers of each owner. The numbers are not published on the sites themselves, so please pull them through your own Apollo account. Alongside every number, include the owner’s LinkedIn profile URL; no other fields are required.
What I expect from you
• A clean CSV or Google Sheet with three columns: Website, Owner Phone Number, LinkedIn Profile
• Accuracy checked against Apollo’s latest data
• Completion within 24 hours of project acceptance
This is a quick job for an experienced user. I will review the sheet immediately and release payment within 24 hours once the data is verified.
Every week I compile a fresh list of Danish houses and apartments that may have changed hands in the previous seven days. Your job is to open the specific web link I supply for each property and confirm whether the listing now shows as “Solgt / Sold.” No phone calls to agents, no site visits; everything happens inside the browser, one URL at a time. I need someone who can commit to roughly 50 hours of this work each week on an ongoing basis and who is comfortable updating a shared Google Sheet (or Excel file, if you prefer) as you go.
For each address you will:
• mark the sale status (Sold / no data)
• attach or link a screenshot of the listing as proof
That’s it. The task is straightforward but must be done manually; no bots or scraping tools. Wh...
I'm looking for a comprehensive list of home decor small businesses in Florida. The list should be organized by city and delivered in an Excel spreadsheet format.
Requirements:
- Contact details
- Product offerings
- Customer reviews
- Categorized by city
Ideal skills and experience:
- Attention to detail
- Experience with data collection
- Proficient in spreadsheet software
I want to replace several manual reporting routines with an end-to-end AI workflow that ingests data from our internal finance databases and live web sources, then produces clear, timely analytics for management. Reporting and analytics are the sole focus, with no transaction execution, so the system must excel at pulling, cleaning, and interpreting numbers rather than booking them. We also want to compare legal documents against term sheets and Excel spreadsheets.
Data sources
• Company databases (SQL, flat files, Excel exports); all our files are in Dropbox
• Extensive web scraping for competitor benchmarks and investment-market signals
If you have ideas for safely adding external financial APIs later, let me know, but the two feeds above are mandatory. - Th...
I need an experienced engineer to analyze and improve a high-demand online booking workflow so bookings can succeed reliably even under extreme traffic. I already have a working Playwright-based browser automation, but during peak demand all sessions currently land on a “high demand / unavailable” state. The goal is to improve the success rate through deeper system understanding, better timing, and smarter flow control. The work involves analyzing the booking flow and state transitions, understanding how availability actually appears during high demand (including delayed or staggered releases), improving timing, retries, waiting strategy, and navigation logic, eliminating race conditions and aborted navigations, and designing the automation to be long-running, reactive, and resilient ...
Hi. I have an Excel list with around 170,000 local businesses in Spain with their emails: "File 1". I have another Excel list with around 65,000 local businesses with their websites but without email addresses: "File 2". Tasks that I need from your side:
1. With the help of AI or another tool, check the website of every business in "File 2" to try to obtain their emails. I will add the emails obtained in task 1 to "File 1" to create a complete file, File 3.
2. With the help of some tool, verify whether the email of every business in File 3 is active.
3. With the help of some tool, verify whether the website of every business in File 3 is active.
4. With the help of AI or another tool, check every activ...
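For tasks 2 and 3, first-pass liveness checks can be scripted before paying for a commercial verification service. A sketch, assuming a simple HTTP check for websites and an MX lookup for email domains (neither proves a specific mailbox exists):

```python
# First-pass filters: does the website respond, and can the email's
# domain receive mail at all? A dedicated verification service should
# still be used for final mailbox-level checks.
import requests
import dns.exception
import dns.resolver  # pip install dnspython

def website_alive(url: str) -> bool:
    try:
        resp = requests.head(url, timeout=10, allow_redirects=True)
        if resp.status_code >= 400:   # some servers reject HEAD; retry with GET
            resp = requests.get(url, timeout=10)
        return resp.status_code < 400
    except requests.RequestException:
        return False

def email_domain_accepts_mail(email: str) -> bool:
    domain = email.rsplit("@", 1)[-1]
    try:
        return len(dns.resolver.resolve(domain, "MX")) > 0
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN,
            dns.resolver.NoNameservers, dns.exception.Timeout):
        return False
```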
I am looking for a full-stack developer to build a Web Compliance Audit platform. The goal is to create a solution that scans websites for GDPR, Privacy, and Cookie compliance using AI (OpenAI API) and generates professional audit reports. The front-end will be WordPress (for user management, payments, and UI), while the "brain" will be a Python-based engine running on a Linux VPS.
Key Features & Requirements:
1. WordPress Frontend:
- Landing Page: Professional UI where users can enter a URL for a "Quick Scan."
- User Dashboard: Where clients can see their scan history and download PDF reports.
- Payment Integration: WooCommerce or Paid Member Subscriptions (Stripe/PayPal) for one-time reports or monthly monitoring plans.
- API Integration: A custom function to send the U...
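The VPS-side "brain" could be a small HTTP service that WordPress calls with a URL. A hedged sketch using FastAPI is below; the endpoint name and the keyword heuristic are placeholders standing in for the real OpenAI-backed analysis.

```python
# Hypothetical shape of the VPS-side scan engine the WordPress site would call.
# The heuristic below is a stand-in; the real engine would feed page content
# to the OpenAI API and assemble a full report.
import requests
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ScanRequest(BaseModel):
    url: str

@app.post("/quick-scan")
def quick_scan(req: ScanRequest) -> dict:
    page = requests.get(req.url, timeout=20,
                        headers={"User-Agent": "compliance-audit/0.1"})
    html = page.text.lower()
    return {
        "url": req.url,
        "mentions_cookies": "cookie" in html,
        "has_privacy_policy_link": "privacy" in html,
        "status_code": page.status_code,
    }
```

Run with `uvicorn engine:app`; WordPress then only needs one authenticated HTTP call per scan.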
I need a developer to collect data from multiple public websites and deliver it in a clean, structured format. This is for legitimate data extraction from publicly available pages. I will share the target URLs and exact data fields with shortlisted candidates.
Scope of work
- Scrape data from multiple public websites (details shared after shortlisting)
- Extract specific fields consistently and handle pagination/filtering where needed
- Normalize/clean the data (remove duplicates, consistent formatting)
- Export results to CSV/Excel/JSON (format to be confirmed)
- Provide a repeatable solution (script or small app) that I can run on demand
- Basic documentation: how to run it, how to adjust settings, where outputs go
Quality requirements
- Reliable scraping with error handling and retries
- Resp...
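The "error handling and retries" requirement usually comes down to a shared HTTP session with backoff plus a dedupe pass before export. A sketch under those assumptions (field names are placeholders):

```python
# Reliability layer: a shared session with retries/backoff, plus a
# normalize-and-dedupe step before export. Column names are placeholders.
import pandas as pd
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session() -> requests.Session:
    retry = Retry(total=4, backoff_factor=1.5,
                  status_forcelist=[429, 500, 502, 503, 504])
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))
    session.headers["User-Agent"] = "on-demand-scraper/0.1"
    return session

def export(records: list[dict], path: str = "output.csv") -> None:
    df = pd.DataFrame(records)
    df["name"] = df["name"].str.strip()  # consistent formatting
    df = df.drop_duplicates()            # remove duplicates
    df.to_csv(path, index=False)
```

Every page fetch goes through `make_session()` so transient 429/5xx responses are retried with exponential backoff instead of failing the run.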
Hi, I need someone based only in Australia or the UK for my Betfair API development. I’m expanding an existing trading suite and now need clean, reliable access to Betfair’s Market Data API. Because of licensing considerations, I can only work with developers physically based in Australia or the United Kingdom.
Scope of work
– Connect to the Betfair Exchange API (non-interactive login is already in place on my side)
– Retrieve real-time odds, price ladders and market status updates for Horse Racing, Football, Tennis and Cricket markets only
– Structure the responses so they can be consumed by my Python back-end (JSON is fine)
– Handle throttling limits and session renewal gracefully
Deliverables
1. Well-commented source co...
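For orientation, a call against Betfair's API-NG REST interface might look like the sketch below. The app key and session token are placeholders, since the poster says non-interactive login is already handled on their side.

```python
# Sketch of a Betfair API-NG call; credentials are placeholders.
import requests

ENDPOINT = "https://api.betfair.com/exchange/betting/rest/v1.0/"
HEADERS = {
    "X-Application": "YOUR_APP_KEY",      # placeholder
    "X-Authentication": "SESSION_TOKEN",  # placeholder
    "Content-Type": "application/json",
}

def list_horse_racing_markets(max_results: int = 10) -> list[dict]:
    body = {
        "filter": {"eventTypeIds": ["7"]},  # 7 = Horse Racing
        "maxResults": str(max_results),
        "marketProjection": ["EVENT", "MARKET_START_TIME"],
    }
    resp = requests.post(ENDPOINT + "listMarketCatalogue/", json=body,
                         headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()
```

Prices themselves come from the companion `listMarketBook/` operation; session renewal is a matter of refreshing the `X-Authentication` token when the API reports it has expired.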
I need a reliable legal researcher who can provide legal commentary on our old company filings from the U.S. SEC’s EDGAR database for publicly traded companies. The immediate task is to locate, download, and organise the relevant documents in a structured way that lets me review them quickly; filing date, form type, company name, CIK, and a direct link back to the source must all be captured. Acceptance: the commentary must be sufficient to prove whether this old application should be re-established. Turnaround is flexible, but please outline how long you need per 100 filings so I can plan the next milestones.
I’m looking for a large data dump of UK-focused domains in the property, home improvement/construction, landlord, and property service niches. The aim is to find unloved, good-quality SEO websites with affiliate or lead-gen potential that I can purchase and grow. See attached for the types of websites/niches that would be suitable. This is a quantity-first task; no manual website review or qualitative judgements required - I will handle outreach. Use Ahrefs or Semrush only to export domains that rank organically for broad UK property topics such as: landlord guides/compliance, property finance (BTL, bridging), property investment & deal sourcing, refurb/renovation planning, EPC & energy efficiency, planning permission & permitted development, HMO licensing, serviced accommodat...
I have an urgent need for a clean, well-structured dataset containing the listing agent’s first name, last name, mailing address, and phone number for well over 500 active Zillow listings. Speed is critical, but accuracy matters just as much; the final file should be ready for immediate import into my CRM. You are free to use whichever stack you prefer (Python with BeautifulSoup or Scrapy, Selenium, residential proxies, even the unofficial Zillow API) so long as rate limits are respected and the data is complete. I don’t need property details or price history; the focus is strictly on the agent contact fields.
Deliverables
• CSV or XLSX with a separate column for each required field
• A short README explaining the script or method so I can rerun it la...
I have a set of websites whose data I need to capture automatically, and I want the whole process built as a reusable Apify actor. I will share the exact URLs, the fields to be collected, and the desired output format once we agree to proceed, but the common theme is structured extraction (think product specs, profile info, or similar). Here’s the outcome I’m expecting:
• A clean Node.js actor that runs on the Apify platform, uses the latest Apify SDK, and follows best practices for request queuing, proxy rotation, and error handling.
• Configurable input schema so I can plug in new target URLs or tweak search parameters without touching the code.
• Output saved to an Apify dataset (JSON/CSV) and pushed to my Google Drive via webhook on each successful r...
Looking for a technical expert to build a custom workflow on n8n.
Required Stack:
• n8n (advanced logic & error handling)
• OpenAI API integration
• HTTP Request / Webhooks / API connections
• Database management (Airtable or similar)
The Project: Build a scalable infrastructure to connect lead data with AI-driven personalization and automated outreach. The system needs to be modular so it can be replicated easily. Detailed technical requirements and the specific workflow logic will be provided during the interview.
To apply:
1. Send a screenshot or video of your most complex n8n workflow.
2. Tell me which hosting you recommend for n8n to ensure 99% uptime.
3. Quote me your price for a 30-day build & test period.
I need a clean pull of every location listed on For each branch please capture: country, state, complete address, service type, phone number, and email address. The final deliverable is a single Microsoft Excel workbook containing one sheet only. All columns should be clearly labelled and the range converted to an official Excel Table so I can apply native filters instantly. No additional filtering is required on your side; just be sure the table structure supports easy filtering by any column once I open the file. Accuracy matters more than speed—every location on the site has to be included and the contact details must match what is shown online. When you hand over the file I will spot-check a sample of entries against the live site to confirm completeness and correctness bef...
We are mid-migration to Retail Express and must have every product from 33 suppliers, about 4,000 SKUs, ready for a single, clean upload. I already have Retail Express’ import template, plus partial Excel sheets from each supplier, yet gaps remain. Your task is to take the existing files, reconcile them against the template, and fill in any blanks you discover for the three critical fields:
• Missing SKUs
• Missing Barcodes
• Missing Descriptions
Where information is absent, you’ll need to source it directly from the supplier catalogues or websites, confirm accuracy, and then complete the master spreadsheet. Once validated, the final file must load into Retail Express without errors or duplicates. Success is measured by a fully populated impo...
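A reconciliation pass like this is often easiest to start in pandas: concatenate the supplier sheets, then emit a work list of rows still missing any critical field. Column names below are assumptions to be matched to the real Retail Express template.

```python
# Merge supplier sheets and list rows still missing the three critical
# fields, producing a work list for manual sourcing.
from pathlib import Path

import pandas as pd

CRITICAL = ["SKU", "Barcode", "Description"]  # assumed template column names

supplier_frames = [pd.read_excel(p) for p in Path("suppliers").glob("*.xlsx")]
merged = pd.concat(supplier_frames, ignore_index=True).drop_duplicates(subset="SKU")

gaps = merged[merged[CRITICAL].isna().any(axis=1)]
gaps.to_excel("fields_to_source.xlsx", index=False)  # work list for manual sourcing
print(f"{len(gaps)} of {len(merged)} rows still need SKU/Barcode/Description data")
```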
I need Octoparse templates built for roughly fifty manufacturer sites in the flooring & renovation niche. Each template must crawl the full product catalog and push clean, structured data into my Supabase database. The extraction scope includes: high-quality images, complete text descriptions and feature lists, links to warranty documents or other disclosures, detailed dimensions and specifications, style and color information, collection / color-family, and every SKU shown on the page. Price data is nice-to-have when present, but its absence should not break the run. Many product pages list matching accessories (trim, transitions, quarter-round, etc.). Your logic must identify those by shared style and color so they enter the database as related items. Typical sites you will start ...
I need an Apify actor that crawls a single website and delivers two things for every page. You can use Puppeteer, Playwright or any other Apify helper library that keeps the run stable and fast. Here’s how I see the workflow:
• I’ll share the target domain, URL pattern, and the exact text blocks I care about.
• You create or fork an Apify actor in JavaScript/TypeScript, configure the request queues, handle pagination where needed, and store results in a dataset.
• The final dataset should export cleanly to JSON and CSV, and the image URLs should be downloadable in bulk (a simple link list or an Apify key-value export is fine).
• When the crawl completes, I want a brief README so I can rerun it myself later without touching the code.
Acceptance crit...
We’re consolidating a Shopify B2B store into a blended B2B/B2C store. This is a data migration project, not theme or app development.
Scope:
- Export Companies, Company Locations, and Customers via Matrixify
- Clean and normalize large CSV datasets (dedupe emails, fix relationships)
- Rebuild import-ready files that match Shopify B2B requirements
- Import in the correct order (Companies → Locations → Customers)
- Validate that customer ↔ company/location links are intact
Requirements:
- Hands-on experience with Shopify B2B (Companies + Locations)
- Matrixify migrations involving Companies, not just products
- Comfortable handling large datasets (automation preferred)
I’m looking for a dependable script or lightweight application that can collect sports betting odds from a web-based platform I have access to and export them into a structured Excel (XLSX) file. The initial focus will be on outright winner markets for:
- Golf
- Cycling
- Baseball
The Excel output should remain clean and well-organized, grouping rows by sport, league, and event, so the data can be easily filtered and analyzed later.
Update Frequency:
- Data refresh every 5 minutes
- Real-time or in-play updates are not required
- Accuracy and stability are more important than speed
Technical Expectations:
- Ability to handle dynamic web content
- Robust approach that runs consistently over time
- Technology stack is flexible (Python, browser automation, or other suitable solutions)
- Clear...
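Structurally this is a fetch-transform-export loop on a 5-minute timer. A skeleton is sketched below; fetch_odds() is a placeholder for whatever access method the platform permits, and the export simply sorts rows so Excel filters group them by sport, league and event.

```python
# Skeleton of the 5-minute refresh loop and the grouped XLSX export.
# fetch_odds() is a placeholder for the platform-specific access logic.
import time

import pandas as pd

def fetch_odds() -> list[dict]:
    # placeholder: return rows like
    # {"sport": "Golf", "league": "PGA Tour", "event": "...", "selection": "...", "odds": 12.5}
    raise NotImplementedError

def export(rows: list[dict], path: str = "odds.xlsx") -> None:
    df = pd.DataFrame(rows).sort_values(["sport", "league", "event"])
    df.to_excel(path, index=False)  # sorted so filters group naturally

while True:
    try:
        export(fetch_odds())
    except Exception as exc:  # stability over speed: log and keep running
        print(f"refresh failed: {exc}")
    time.sleep(300)           # 5-minute refresh, per the brief
```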
I already have valid login credentials to my coaching app, but the platform doesn’t give a built-in option to save lessons offline. I need every video class I’m enrolled in pulled down from both the Android app and the web version, then handed back to me neatly organised (Course → Module → Lesson, MP4 or the source format). Use whatever reliable method you prefer (Python scripts, yt-dl, network-capture tools, or similar) to grab the streams, keep the original resolution, and avoid quality loss. A light DRM layer may be present, so prior experience with HLS/DASH stream extraction will help.
Deliverables
• Full set of video files, correctly named and structured
• Short guide or reusable script so I can repeat the download when new classes appea...
Need previously scraped truepeoplesearch data. Only bid if you already have the dataset.
I already have a scenario set up and ready to run; what I’m missing is an active, fully-functional LinkedIn Sales Navigator seat that I can connect my Vayne module to. If you currently hold such an account and can grant API or session-cookie access (whichever method you normally use for integrations), let’s work together. Once connected, I’ll handle the filtering logic inside Vayne, but I need your account to serve as the data source and, ideally, your guidance to be sure the pull limits stay within LinkedIn’s acceptable use. When you reply, focus on your experience: how you’ve successfully linked Sales Navigator with automation platforms before, any anti-scrape precautions you follow, and typical daily search volumes you’ve handled without issu...
We are looking for a freelancer to build a database of 1,000 active iGaming websites (casino / sportsbook / betting operators).
Scope of Work: You will identify and collect 1,000 unique, live iGaming operator websites and enter them into our provided form/database. Each record will have several different data points, including but not limited to:
- Website URL
- Contact emails
- Company / Brand Name
- Country / Jurisdiction (where the operator is based or licensed)
- Website Languages (select all that apply)
- Availability of Providers
Important Guidelines
- Only live, operational websites (no affiliates, review sites, or news portals)
- No duplicates (we control this in the record entry form)
- We prefer speed and volume over excessive research
- Do not spend significant time trying to ...
I want a Telegram bot that can reliably extract the client’s phone number, the property owner’s number, and the unit number from listings on Bayut, Propertyfinder, and similar real-estate sites, even though these fields aren’t shown in the public UI and no official API is used. Here’s the flow I’m after: I drop a listing URL (or several) into the chat, the bot quietly scrapes the page, jumps through whatever loophole is needed to reveal the hidden contact and unit details, then replies with a single, structured template that looks something like:
Property: <Title>
Unit No.: <unit_number>
Client: <client_phone>
Owner: <owner_phone>
Source: <URL>
Key points
• No reliance on the Bayut or Propertyfinder APIs...
I have an unstructured text file that needs to end up as clean, well-organized rows and columns in Google Sheets. The data will come strictly from this file, not manual keying or an API, so the first step is parsing whatever patterns you can detect (line breaks, repeating markers, dates, or any other cues) that let you segment the content logically. Once parsed, I want each logical field mapped to its own column in a Google Sheet I’ll share with you. If a repeatable rule can be established, please codify it in either Google Apps Script or a small Python script so I can reuse the process whenever a new file arrives.
Deliverables:
• The populated Google Sheet, fully checked for alignment and obvious anomalies
• The script or documented method you used to transform th...
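If the file turns out to follow a blank-line-between-records, "Key: value" pattern, the repeatable rule could be as small as the sketch below (those cues are assumptions; swap in whatever markers the real file uses). The resulting CSV imports directly into Google Sheets.

```python
# Segment on blank lines, capture "Key: value" fields, and write a CSV
# whose columns are the union of all keys seen.
import csv
import re

with open("input.txt", encoding="utf-8") as f:
    blocks = [b for b in re.split(r"\n\s*\n", f.read()) if b.strip()]

records = []
for block in blocks:
    fields = dict(re.findall(r"^([\w ]+):\s*(.+)$", block, flags=re.M))
    if fields:
        records.append(fields)

columns = sorted({key for rec in records for key in rec})
with open("for_sheets.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=columns)
    writer.writeheader()
    writer.writerows(records)  # import this CSV into the shared Google Sheet
```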
I need to scrape a website with public content and export it to an organized Excel file. It's approximately 700k pages with specific data. Some of the data I need is missing, but I have an Excel file with this data that should be used to autocomplete the missing information. In summary:
1. Scrape the website to an Excel file (I will give an example)
2. Autocomplete the missing information based on my Excel file.
After this project, I will need another one, which will require finding contact information with some precision, perhaps using AI or some specific logic, but that will be a topic for later. Thank you all
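Step 2 maps naturally onto pandas' combine_first: scraped values are kept and gaps are filled from the client's reference file. The shared key column name here is an assumption.

```python
# Fill gaps in the scraped sheet from the reference file, keyed on a
# shared ID column ("item_id" is an assumed name).
import pandas as pd

scraped = pd.read_excel("scraped.xlsx").set_index("item_id")
reference = pd.read_excel("reference.xlsx").set_index("item_id")

# Scraped values win; only missing cells are taken from the reference.
completed = scraped.combine_first(reference)
completed.reset_index().to_excel("completed.xlsx", index=False)
```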
I’m interested in buying an existing dataset you’ve already scraped from TruePeopleSearch. What I specifically need is the contact information section—phone numbers for sure, and any email addresses or other direct lines you may have captured at the same time. I don’t need the address history or relatives/associates fields, so feel free to leave those out if they’re present. This is not a fresh scrape request; I only want data that you currently have on hand. Anything compiled within roughly the last twelve months is perfect, but older archives could still be useful if they’re large and well-structured. I’m flexible on format: CSV, Excel, or JSON all work. Just let me know which one your files are in, when the data was pulled, and roughly how man...
I need help compiling a clean, well-structured spreadsheet of information pulled directly from publicly available websites and social media platforms.
Scope
• Locate the pages I specify (or similar ones you suggest) and extract relevant text content: company descriptions, service lists, post captions, etc.
• Capture all visible contact information on those same pages, including email addresses, phone numbers, and any listed location details.
Requirements
• Manual collection only; no automated scraping tools that violate terms of service.
• Record each source URL alongside the data so I can verify accuracy.
• Maintain consistent formatting in Excel or Google Sheets: one row per entity, separate columns for each data point.
• Double-c...
I need a reliable researcher who can pull company filings from the U.S. SEC’s EDGAR database for publicly traded companies. The immediate task is to locate, download, and organise the relevant documents in a structured way that lets me review them quickly; filing date, form type, company name, CIK, and a direct link back to the source must all be captured.
Scope
• Search the EDGAR system and collect every filing that matches the criteria I’ll send (ticker list or CIKs).
• Save each document in its original format (HTML, PDF, or XBRL when available) and label the files clearly.
• Build a spreadsheet that lists each filing with the key metadata mentioned above.
Acceptance
The work is complete when I receive a zipped folder containing the documents an...
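EDGAR exposes exactly this metadata through its public submissions endpoint, so the spreadsheet step can be scripted. A sketch (the SEC asks for a descriptive User-Agent; the CIK is an example):

```python
# Pull filing metadata from SEC EDGAR's public submissions endpoint and
# build rows with form type, date, and a direct link back to the source.
import requests

HEADERS = {"User-Agent": "Research research@example.com"}  # identify yourself per SEC policy

def list_filings(cik: int) -> list[dict]:
    url = f"https://data.sec.gov/submissions/CIK{cik:010d}.json"
    data = requests.get(url, headers=HEADERS, timeout=30).json()
    recent = data["filings"]["recent"]
    rows = []
    for form, date, accession, doc in zip(recent["form"], recent["filingDate"],
                                          recent["accessionNumber"],
                                          recent["primaryDocument"]):
        rows.append({
            "company": data["name"],
            "cik": cik,
            "form": form,
            "filingDate": date,
            "link": f"https://www.sec.gov/Archives/edgar/data/{cik}/"
                    f"{accession.replace('-', '')}/{doc}",
        })
    return rows

print(list_filings(320193)[:3])  # example CIK: Apple Inc.
```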
I need a detail-oriented data entry operator who can take raw information and input it manually with flawless accuracy. The task is straightforward but demands precision: transfer data from source documents into the designated spreadsheets and cross-check entries to avoid any discrepancies. You’ll be working with standard tools (Excel or Google Sheets), so please be comfortable navigating formulas for quick validation and spotting inconsistencies. Speed is welcome, yet accuracy is non-negotiable; I will run random checks to verify every batch before sign-off.
Deliverables
• A fully populated, error-free spreadsheet formatted exactly as the template provided
• A brief changelog noting any unclear or missing source information you encountered
I’m r...
Senior Automation Engineer Needed – CRE Deal Intake AI Agent (Email → NDA → OM → Drive)
We are a commercial real estate investment firm receiving hundreds of broker emails daily with new opportunities. We already have:
✅ Email classification
✅ CRM creation
✅ Email thread summarization
✅ Draft replies
✅ Google Drive → CRM ingestion pipeline
What we need built: A deal intake automation agent focused ONLY on handling inbound opportunities. This is NOT a ChatGPT bot. This is a production automation system.
Core Responsibilities
Build an agent that:
1. Gmail Monitoring (3 inboxes)
• Watches Gmail via API
• Detects deal-related emails
• Identifies: OM attachments, OM links, NDA links, NDA attachments
2. Pro...
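The Gmail-monitoring piece would likely poll each inbox via the Gmail API for recent messages with attachments or links. A sketch assuming OAuth credentials are already obtained (setup elided); the query string and the yielded fields are assumptions:

```python
# Poll one inbox for recent messages with attachments; downstream logic
# would classify subjects/attachments as OM vs NDA. Credentials setup elided.
from googleapiclient.discovery import build

def find_deal_emails(creds, query: str = "newer_than:1d has:attachment"):
    service = build("gmail", "v1", credentials=creds)
    resp = service.users().messages().list(userId="me", q=query).execute()
    for ref in resp.get("messages", []):
        msg = service.users().messages().get(userId="me", id=ref["id"]).execute()
        headers = {h["name"]: h["value"] for h in msg["payload"]["headers"]}
        yield {"id": ref["id"],
               "subject": headers.get("Subject", ""),
               "from": headers.get("From", "")}
```

Running the same function with three credential sets covers the three inboxes; Gmail also offers push notifications via `users().watch()` if polling proves too slow.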
I track a long list of OTC tickers and need a hands-off way to grab every historical and new financial report that appears on the “Filings & Disclosure” section of otcmarkets.com. At the moment I only care about the PDFs of annual, quarterly and interim filings, but the solution should be flexible enough that I can later extend it to press releases or historical data if required. Here’s what I expect:
• A script (preferably in Python 3 using requests / BeautifulSoup, or Selenium if necessary) that accepts a plain-text list of symbols, checks each page once per day and downloads any financial report that is not already saved.
• Folder or filename logic that organises the PDFs by ticker and date so nothing is overwritten.
• A simple log or CSV that r...
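The download-if-new behaviour described here reduces to: discover report links per ticker, skip files already on disk, and log the rest. A skeleton with the site-specific discovery left as a placeholder:

```python
# Daily check skeleton: anything not already on disk gets downloaded and
# logged. find_report_links() is a placeholder for the site-specific logic.
import csv
import datetime
from pathlib import Path

import requests

def find_report_links(symbol: str) -> list[tuple[str, str]]:
    """Placeholder: return (report_name, pdf_url) pairs for one ticker."""
    raise NotImplementedError

def fetch_new(symbols: list[str], root: Path = Path("filings")) -> None:
    with open("download_log.csv", "a", newline="") as log:
        writer = csv.writer(log)
        for symbol in symbols:
            folder = root / symbol          # organised by ticker
            folder.mkdir(parents=True, exist_ok=True)
            for name, url in find_report_links(symbol):
                target = folder / f"{name}.pdf"
                if target.exists():         # already saved on an earlier run
                    continue
                target.write_bytes(requests.get(url, timeout=60).content)
                writer.writerow([symbol, name, url, datetime.date.today()])
```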
Looking for Freelancers – Data Research Task
I’m looking for experienced freelancers based in or familiar with Erode, Coimbatore, and Salem to help with data research.
Data required: Engineering colleges with active placement records
Contact details of:
- Training & Placement Officer (TPO)
- Chairman
- Principal
Details needed:
- College name
- City
- Official email IDs
- Phone numbers (where available)
- College website
I need a lightweight, well-structured scraper that can navigate a specific website and harvest two pieces of information from every public user profile: the profile picture (image file or URL, whichever is more practical) and the bio section. No emails or activity logs are required; just these profile details. Your script should:
• Visit every reachable user profile, including through pagination or internal search pages
• Extract the bio text and the profile picture, storing the image locally or saving its direct link next to the bio in a CSV/JSON file
• Respect robots.txt, employ modest request throttling, and handle the site’s usual edge cases: lazy-loaded images, occasional 4xx/5xx responses, and any login or cookie notices that appear for anonymous visitors ...
I need every post on our Instagram feed gathered into a single place—148 in total. Simply open each image, copy its share URL, and paste it into one document. A plain-text list or a basic spreadsheet is fine as long as the links are in their own rows or lines; no captions, dates, or metrics required. Accuracy matters more than speed, but this is a quick task: double-check that each link opens the intended post and that none are missed or duplicated. Feel free to send the first five links as a quick sample so I can confirm the format before you finish the rest. Once all 148 URLs are compiled, send the file back and the job’s done.
I want to automate the way I gather labour-market data. The goal is to build a single, reliable workflow that:
• Collects fresh job postings from LinkedIn, Indeed and HelloWork.
• Captures, at minimum, the job title, full description, company name and location.
• Stores everything in a structured database I can easily query or export.
• Retrieves complete CVs from LinkedIn and, when possible, other social platforms, then links each profile to the same database schema.
Feel free to choose the most stable stack you trust (Python with Scrapy or Selenium, Node with Puppeteer, direct GraphQL or REST endpoints, etc.) as long as it runs unattended, copes gracefully with rate limits / captchas, and offers a simple way for me to schedule or trigger updates. Ac...
I need to build a reliable, well-structured lead list and I already know exactly what it should contain. The task is to extract contact information (email addresses, phone numbers and full mailing addresses) from three sources: company and organisation websites, their public social-media profiles, and well-known online directories. I expect the data to be gathered with a solid scraping workflow (Python, Scrapy, BeautifulSoup, Selenium or an equivalent stack is fine) and then verified so that bounced emails and dead numbers are kept to an absolute minimum.
Deliverables
• One CSV or Excel file with separate columns for name, company, job title, email, phone, street address, city, state, ZIP/postcode, country, source URL and date collected.
• No duplicates; every...
Targeting local service-based businesses with high customer value in the following niches for Google Ads:
1. Plumbers
2. HVAC
3. Electricians
4. Roofers
Location: Top high-paying US cities (in EST preferred)
Fields required:
1. Email (of course)
2. Business name
3. Niche/category
4. City
Need to send a sample of 50-100 leads when requested (before awarding the project). Once the project is awarded:
1. Need to send verification/validation status of the emails as well.
2. Need to disclose the source of where they're scraped from (Google Maps, Facebook, Yellow Pages, etc.) and what verification/validation service API is used (e.g. Reoon, MillionVerifier, ZeroBounce).
Note:
1. Time wasters please stay away.
2. Need more leads after the first 12k-lead batch (future opportunit...
I need every bicycle, accessory, and clothing item currently sold on Bike24 imported into my own e-commerce site. Each listing on my end must carry over the full product description, the current price, and all available images exactly as they appear on Bike24. My store is already live; what I am missing is a reliable, repeatable way to pull in this catalogue and keep it in sync. A custom scraper, API connector, or an import script that feeds directly into my CMS are all acceptable, as long as it works smoothly and can be rerun whenever Bike24 updates their stock. To be crystal clear, the finished job is considered complete when:
• All bikes, accessories, and clothing from Bike24 are visible on my site with correct titles, descriptions, prices, and image galleries.
• Products ...