Web Scraping is the process of extracting data or information from an online source such as a website, database, application, etc. Web Scraping Specialists have the skills to help people collect valuable digital data and quickly find the useful information they need from websites, mobile apps, and APIs. The experts usually use web scraping tools and advanced technologies to collect large amounts of targeted data without any manual work for the client.
With web scraping, tasks that otherwise may require a lot of time can be automated and done faster. Our experienced Web Scraping Specialists use their expertise to develop scripts that continuously target structured and unstructured data sources.
Here are some projects that our expert Web Scraping Specialists made real:
Web Scraping Specialists are skilled professionals who know how to help businesses optimize processes while collecting the rich structured data they need for their specific purposes. Our experts speed up the process and return accurate results in less time, so that the customer can make better decisions more quickly without any manual labour. If you are looking for a talented professional to build a web scraping project for you, you have come to the right place. Here on Freelancer.com you can find talented professionals who will get the job done with top-quality results! Post your project now and see what our Web Scraping professionals can do for you!
Out of 367,777 reviews, clients rate our Web Scraping Specialists 4.9 out of 5 stars.
We are looking for an experienced developer who can build an automated system to extract daily newly incorporated company data from the MCA (Ministry of Corporate Affairs) website – https://www.mca.gov.in. The system should automatically collect and deliver the list of companies incorporated each day in a structured format (Excel / CSV / API / Database). Scope of Work: Develop a web scraping or API-based solution to extract daily incorporated company data from the MCA portal. The tool should automatically fetch newly incorporated companies every day. Data should include the following fields (minimum): • CIN • Company Name • Date of Incorporation • ROC (Registrar of Companies) • State • Company Type (Private Limited / LLP / OPC / Public Limited) • Authorized Capital (if available) • Regist...
I have a steady stream of healthcare-related forms that must be transcribed quickly and with absolute accuracy into our online system. Most files arrive as scanned PDFs; I will share them through Google Drive. Your task is simply to transfer every field into the corresponding web form, double-checking spelling, dates, and numeric values before submission. A medical background is not required, but an eye for detail is essential because patient names, prescription codes, and insurance numbers cannot contain typos. If you already work confidently with typical data-entry tools—Chrome autofill blockers, basic Excel for cross-checking totals, or any preferred keystroke-automation software—feel free to use them as long as the final entries match the source documents exactly. I w...
We are looking for an experienced developer to build a robust web scraping solution capable of extracting structured data from a login-protected medical/drug repository website. The platform contains a large database of drug information (potentially hundreds of thousands to over a million pages). The scraper should be able to navigate through the website after login, systematically extract relevant drug data, and store it in a structured format. Scope of Work: Develop a scraper that can log into a protected website. Navigate through the drug repository pages. Extract structured information from each drug page. Handle pagination and large-scale crawling. Implement mechanisms to prevent crashes or interruptions during long scraping runs. Store extracted data in a structured format such as ...
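As an illustration of the pattern such a build usually follows — log in once with a persistent session, walk the paginated index, and parse each page — here is a minimal Python sketch. The base URL, login fields, and HTML pattern are hypothetical placeholders, not the real site's structure:

```python
import re

# Hypothetical crawler for a login-protected repository. All URLs,
# form fields, and the HTML pattern below are illustrative only.
BASE = "https://example-drug-repo.test"

def page_urls(total_pages, base=BASE):
    """Yield one listing URL per page of the paginated index."""
    for page in range(1, total_pages + 1):
        yield f"{base}/drugs?page={page}"

def parse_drug_rows(html):
    """Extract (drug_id, name) pairs from a listing page. A real
    scraper would use BeautifulSoup/lxml; a regex suffices for this
    fixed illustrative pattern."""
    return re.findall(r'<a href="/drug/(\d+)">([^<]+)</a>', html)

def crawl(session, total_pages):
    """Not executed here: log in once, then walk every page,
    checkpointing progress so long runs can resume after a crash."""
    session.post(f"{BASE}/login", data={"user": "...", "pass": "..."})
    for url in page_urls(total_pages):
        html = session.get(url, timeout=30).text
        for drug_id, name in parse_drug_rows(html):
            yield {"id": drug_id, "name": name, "source": url}

sample = '<a href="/drug/42">Aspirin</a><a href="/drug/7">Ibuprofen</a>'
print(parse_drug_rows(sample))  # [('42', 'Aspirin'), ('7', 'Ibuprofen')]
```

In production, `crawl` would take a `requests.Session` (or a Playwright context for JavaScript-heavy pages) and write each batch to disk as it goes, which is what makes long runs resumable.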
I need a fresh batch of 500–1,000 verified email addresses belonging to small-to-mid size private-label Amazon FBA sellers. Ideal targets are doing roughly $50k–$500k per month in Beauty & Personal Care, Health Supplements, Pet Supplies, Home & Kitchen, or any other category prone to counterfeiting. Sellers may be based anywhere—US, India, Europe, Africa and beyond—so long as the contact data is accurate and legal to use. My outreach platform flags hard bounces, so please lean on reputable public data directories, Hunter-style email verification tools, and—when useful—social media profiles to keep the list clean. No gray-area scraping. Deliver the finished file in Google Sheets or CSV with these columns: • Seller / Store name • Mai...
I need a quick data-grab from my FootyStats account. Once I share the login details and the exact list of leagues, simply navigate to each competition and download every available Team statistics spreadsheet. It should come to roughly 250 individual CSV/XLSX files. Only team stats are required—skip the match or player datasets. After all files are downloaded, place them in one organised folder, compress it into a single ZIP archive, and send me the download link or attach it here. Confidentiality is important, so please handle the credentials securely and delete them once the job is done. As soon as I receive and verify the ZIP (correct leagues, no corrupt files), the task is complete.
I have a collection of customer reviews that must be transformed into clear, data-driven insight. The raw files might arrive as CSV, Excel, or even straight from a database—the format is flexible—so the first task is to clean and standardise whatever I supply. That means handling empty rows, removing noise (HTML tags, punctuation, stop-words, etc.) and normalising the text. Once the data is tidy I need robust sentiment classification. Please build the pipeline in Python, making sensible use of NLTK and/or TextBlob for tokenisation, lemmatisation and polarity scoring. The reviews span more than one language, but the immediate focus is on the English subset; your code should therefore detect language and process only English for this milestone while staying extensible for future...
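A minimal sketch of the cleaning stage described above, with TextBlob polarity scoring deferred behind a function so the hypothetical `clean_review` helper runs without the dependency installed (language detection, e.g. via langdetect, is omitted here):

```python
import re

def clean_review(text):
    """Strip HTML tags and punctuation, collapse whitespace, lowercase."""
    text = re.sub(r"<[^>]+>", " ", text)   # drop HTML tags
    text = re.sub(r"[^\w\s]", " ", text)   # drop punctuation
    return re.sub(r"\s+", " ", text).strip().lower()

def polarity(text):
    """Polarity scoring via TextBlob; import deferred so the cleaning
    step stays usable without the third-party package."""
    from textblob import TextBlob
    return TextBlob(text).sentiment.polarity  # range -1.0 .. 1.0

raw = "<p>Great product!!!   Fast shipping.</p>"
print(clean_review(raw))  # great product fast shipping
```

Stop-word removal and lemmatisation (NLTK's `stopwords` corpus and `WordNetLemmatizer`) would slot in after `clean_review` in the same pipeline.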
I have a single website that lists venues and I need a clean spreadsheet pulled from it. Once we start, I will share the exact URL so you can inspect the structure before you begin. For every venue that appears on the site, I want these fields captured: • Venue name • Email address • Phone number • Full physical address Please scrape the entire catalogue—restaurants, event spaces, hotels or any other venue type the site includes—then deliver the data in CSV or Excel format with one row per venue and clearly labeled columns. I’m happy to answer any structural questions about the site up-front and will consider the job complete when the file imports without errors and sample checks match what’s live on the page.
Hello! :) I'm seeking a versatile virtual assistant to join my team for 15+ hours per week (minimum). The role involves a mix of marketing (90%) tasks (blog, content, social, reddit, linkbuilding) and admin-related support (10%) tasks (landing pages setup, research etc). The ideal candidate should be skilled in traffic generation tasks: SEO/GEO, Reddit, blog content, and social media management as the key task is to help setup offer pages (landing pages) and drive traffic to them (organic traffic). Your success in this role will be determined by your ability to generate traffic for the projects you're assigned to over a 3-month probation period. After which your hourly rate will increase and your contract will be extended for a further 9-12 month contract. If you perform exc...
I need a rock-solid n8n workflow that, whenever I trigger it, navigates through selected e-commerce sites and public business directories, captures every piece of business information that is publicly available, and stores it in a clean, query-ready format. The data I care about includes the business name, category or type, “about” text, founders’ names, any additional corporate details the site reveals, plus all images properly downloaded and tagged. I will be running various data-analysis models on the output, so accuracy, consistency, and tidy structuring are non-negotiable. The flow must: • Accept a list of target URLs and run on demand (no fixed schedule). • Respect site rate limits while still remaining efficient. • Handle pagination, lazy-...
I am building an AI-driven workflow that takes products from Amazon and publishes them on eBay automatically. The core of the job is a Chrome-based extension tool that behaves like a real seller listing a real item while staying invisible to the marketplaces’ anti-bot systems. Here is what the first milestone needs to achieve: • Seamlessly capture product data on Amazon and generate a ready-to-publish eBay listing, complete with descriptions, high-resolution images, category and tag placement, plus accurate pricing and shipping details. • Provide a clean UI where I can fine-tune the listing before it goes live. • Run inside a real Chrome instance (not a headless fallback) and randomise fingerprints, timing and mouse events so it passes browser-integrity checks....
I need a Python-based solution that automatically gathers company and shareholder data, pulls supplementary details via external APIs, and outputs a clean, unified dataset I can query at any time. Scope of the scrape • Sources: company websites, financial databases and relevant public records. • Website focus: company profiles, turnover figures and any available Demat / share-holding particulars. What the tool should do 1. Crawl or call the above sources, respecting rate limits. 2. Parse the required fields, normalise names and IDs, then enrich each record through one or more APIs (for example OpenCorporates, Clearbit or any better suggestion you have). 3. Store results in a structured format (CSV plus an SQLite or Postgres option). 4. Offer a simple comma...
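One hedged way to sketch the normalise-and-store step (points 2–3) with the standard library's sqlite3 — the `normalise_name` rules and sample records are illustrative assumptions, not a complete canonicalisation scheme:

```python
import re
import sqlite3

def normalise_name(name):
    """Canonicalise a company name so records from different sources
    line up. The suffix handling below is a deliberately tiny sketch."""
    name = name.strip().upper()
    name = re.sub(r"[.,]", "", name)
    name = re.sub(r"\s+(PVT|PRIVATE)\s+(LTD|LIMITED)$", " PRIVATE LIMITED", name)
    return re.sub(r"\s+", " ", name)

conn = sqlite3.connect(":memory:")  # a file path in production
conn.execute("""CREATE TABLE companies (
    cin TEXT PRIMARY KEY, name TEXT, turnover REAL, source TEXT)""")

records = [
    ("U1234", "Acme Pvt. Ltd.", 1.5e6, "registry"),
    ("U1234", "ACME PRIVATE LIMITED", 1.5e6, "website"),  # same entity
]
for cin, name, turnover, source in records:
    conn.execute(
        "INSERT OR REPLACE INTO companies VALUES (?, ?, ?, ?)",
        (cin, normalise_name(name), turnover, source),
    )

# The shared CIN plus name normalisation collapses both rows into one.
print(conn.execute("SELECT cin, name FROM companies").fetchall())
# [('U1234', 'ACME PRIVATE LIMITED')]
```

Keying on a stable identifier (here the CIN) with `INSERT OR REPLACE` is what makes repeated enrichment runs idempotent.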
I'm seeking a Python RPA expert specializing in computer vision-based (offline) web scraping for a web search and document download project. You'll scrape sites, download/classify documents (e.g., public records via CV/NLP), design neural networks for extraction, and build scalable workflows. Patience for alpha testing and fine-tuning iterations is key. Key Requirements: • Pure Python RPA, with the core orchestrator in Python and no third-party RPA tools. • Web navigation/scraping with Selenium/Playwright: document download, classification, OCR/text extraction. • Build/train neural networks (e.g., CNNs for image doc classification). • NLP expertise with spaCy for entity extraction. • Computer vision using TensorFlow/OpenCV (offline vision libraries preferre...
Looking for an OpenBullet config engineer to build a config for a social media site. Please apply only if you have experience building OpenBullet configs and describe a couple of your projects built.
I need a reliable, repeatable script that automatically pulls historical and fresh match-result data for the Premier League, La Liga, Serie A, the English Championship and the Bundesliga 1. The workflow should: • visit publicly available sources you identify (official league sites, APIs, or reputable statistics portals), • extract the full-time score, date, home/away sides, venue and any metadata you can pick up (round, referee, attendance), • extract data on goals including the exact time and goalscorer • additional data extracted from match Commentary would be helpful, i.e. substitutions, shots on goal, shots off target, etc. with times will help • normalise club names so they are consistent across all leagues, and • write everything into a single, tidy...
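The club-name normalisation step this brief asks for can be sketched as a simple alias table; the entries below are a tiny illustrative sample, not a complete mapping:

```python
# Map the many spellings a source may use onto one canonical name.
CANONICAL = {
    "man utd": "Manchester United",
    "man united": "Manchester United",
    "manchester utd": "Manchester United",
    "atletico de madrid": "Atlético Madrid",
    "atlético madrid": "Atlético Madrid",
}

def normalise_club(raw):
    """Canonicalise a club name; unknown names pass through unchanged
    so they can be reviewed and added to the table later."""
    key = raw.strip().lower().replace(".", "")
    return CANONICAL.get(key, raw.strip())

rows = [("Man Utd", 2, "Chelsea", 1), ("Manchester Utd", 0, "Arsenal", 0)]
cleaned = [(normalise_club(h), hs, normalise_club(a), ga)
           for h, hs, a, ga in rows]
print(cleaned)
# [('Manchester United', 2, 'Chelsea', 1), ('Manchester United', 0, 'Arsenal', 0)]
```

Passing unknown names through rather than guessing keeps the dataset auditable: a quick `set()` over the output surfaces any spelling the table has not covered yet.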
I need a complete, current data set for every car dealership operating in Milwaukee, Wisconsin, pulled directly from each dealership’s own website. For every dealership I want a phone number and a working email address; All results must be delivered into a single Google Sheet that I will share with you at the start of the project. I only want information gathered from official dealership sites—no third-party listings or directories. Please keep requests polite enough to avoid rate-limiting and respect any guidance. Deliverables • Google Sheet tab 1: one row per dealership with name, phone, email, and website URL • Google Sheet tab 2: complete inventory list with dealership name, make, model, year, and the source link Acceptance is based on a spot-check of ...
R Expert Needed: Advanced Signal Processing for Maternal, Fetal & Neonatal Health We are seeking a senior R scientist with deep experience in audio / seismic data, wavelet methods, and Bayesian modeling to help demonstrate that R is just as good as Python at surfacing early physiological health markers. This role is for someone comfortable working below the noise floor—where the signal of interest is subtle, nonstationary, and embedded in the mechanics of living systems. What you’ll work on 1. Signal conditioning & noise removal * Design and evaluate signal conditioning pipelines for low-amplitude physiological data * Implement filtering strategies to remove mains contamination (50/60 Hz and harmonics), including: o Notch / comb filters o Adaptive and time-varying filt...
I have roughly one thousand JPEG images. Each filename matches a numbered row already laid out in my Excel/Numbers workbook, and every image needs to land in the specific cell that carries its number. I also need every photo resized so it fits cleanly inside its cell without overflowing. I will pass you the folder of JPEGs plus the workbook. You return the same file, now populated with the images in their correct positions and neatly scaled. Accuracy is critical here, so please be comfortable with bulk image insertion and cell-sized resizing in either Excel or Apple Numbers—whichever you prefer as long as the final file opens perfectly on both Mac and Windows. Deliverables: • Updated workbook with all 1,000 images placed in their matching cells • Images resized to...
I have a detailed set of internal criteria and need an experienced researcher to apply it to a large pool of informational websites. The goal is simple: surface only the sites that truly fit my content-curation needs.
I will send you a single Excel workbook that already contains a clean template and the full list of 500 company names. Your task is to visit each company’s official website and capture three fields—contact name, email address, and phone number—then enter them in the exact columns provided. Please work strictly with information that is publicly available on the company site; do not pull data from LinkedIn or any third-party source. Consistency matters, so enter phone numbers in international format where it is shown and keep the spelling and punctuation of names exactly as they appear online. Deliverable • The same Excel file returned, fully populated and double-checked for accuracy and spelling. I will review the sheet for completeness (500 rows, three data poin...
I need product details captured from a set of websites and delivered in a clean, structured format I can load straight into Excel or a database. The job involves visiting the URLs I provide, pulling every product’s name, price, SKU, description, and any other specifications that appear on the page, then handing everything back to me in a .csv or similar flat file. A lightweight script—Python with BeautifulSoup, Scrapy, or a comparable tool—would be ideal so I can rerun the extraction whenever the catalogue changes, but I’m happy to discuss whether you deliver only the compiled dataset or include the code as well. Please keep the workflow ethical (no site overload, rate limits respected where applicable) and ensure the final data set is complete, deduplicated, and readable wi...
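A minimal BeautifulSoup sketch of the extract-and-flatten flow described above — the class names and markup are invented for illustration and would need adapting to each real site's structure:

```python
import csv
import io

from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

# Invented markup standing in for a fetched product-listing page.
sample_html = """
<div class="product">
  <h2 class="name">Widget A</h2><span class="price">$9.99</span>
  <span class="sku">W-001</span>
</div>
<div class="product">
  <h2 class="name">Widget B</h2><span class="price">$14.50</span>
  <span class="sku">W-002</span>
</div>
"""

def extract_products(html):
    """Yield one flat dict per product card found in the page."""
    soup = BeautifulSoup(html, "html.parser")
    for card in soup.select("div.product"):
        yield {
            "name": card.select_one(".name").get_text(strip=True),
            "price": card.select_one(".price").get_text(strip=True),
            "sku": card.select_one(".sku").get_text(strip=True),
        }

rows = list(extract_products(sample_html))
buf = io.StringIO()  # a real run would write to a .csv file instead
writer = csv.DictWriter(buf, fieldnames=["name", "price", "sku"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Deduplication then falls out naturally by keying the rows on SKU before writing.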
Hello! :) I'm seeking a versatile virtual assistant to join my team for 15+ hours per week (minimum). The role involves a mix of marketing and admin-related support tasks (content, landing pages, research etc). The ideal candidate should be skilled in traffic generation tasks: SEO/GEO, Reddit, blog content, and social media management as the key task is to help setup offer pages (landing pages) and drive traffic to them (organic traffic). Your success in this role will be determined by your ability to generate traffic for the projects you're assigned to over a 3-month probation period. After which your hourly rate will increase and your contract will be extended for a further 9-12-month contract. If you perform exceptionally (above traffic targets), you may be offered a reve...
I need an experienced developer to build a fully automated affiliate marketing intelligence system that discovers trending products, generates platform-optimized content, and publishes across 5 social media platforms with embedded affiliate links — all running on autopilot via N8N workflows. This is a complete end-to-end system, not a simple automation. Read the full spec before bidding. System Modules Required Module 1: Trend Discovery Engine Scrape Google Trends via pytrends for rising product keywords (runs every 6 hours) Cross-reference with Amazon Best Sellers to validate commercial intent Score and rank products by trend velocity, search volume, and commission potential Store ranked product queue in PostgreSQL database Module 2: Affiliate Data Collector Connect to Amazon Pr...
Regular comprehensive snapshot. There are 3,000 products, with 20 columns for each product, captured page by page. I’m looking for a repeatable, fully automated workflow built on a Python-based stack (Scrapy, BeautifulSoup, Selenium, Playwright, or an equivalent you prefer). Robustness is key: the crawler should cope with pagination and JavaScript-rendered pages. Clear, well-commented code is part of the deliverable so my team can review and rerun it internally. Each quarterly hand-off must include: • Cleaned CSV or JSON containing the structured product records • The raw HTML or a compressed WARC snapshot for auditing • The executable script(s) plus a brief change log highlighting any site-structure updates you handled Please outline your proposed tool chain, an example of a large scrape yo...
I’m looking to start a Python-based project purely for personal use, and I’m intentionally keeping the brief open so creative developers can pitch ideas that excite both of us. Whether it’s a handy automation script, a data-driven dashboard, a lightweight Flask or Django web app, a web-scraping utility, or even a small game, I’m happy to explore any direction—as long as it showcases clean, well-documented Python code. Because I do not have a strict deadline (no time limit), I prefer quality over speed. Take the time to think through the concept, architecture, and tech stack; then send me a detailed project proposal that explains: • The core idea and its personal value • Key Python libraries or frameworks you plan to use (e.g., Pandas, Selenium, Fa...
**Title:** Data Research – Collect Email Addresses of Local Businesses (South London Postcodes) **Description:** I’m looking for a freelancer to build a list of local businesses located in the following London postcodes: SE22 SE23 SE24 SE25 SE26 SE27 SE28 The task is to identify **shops/businesses that do NOT have a website** and collect their **publicly available email addresses**. **Requirements:** For each business, please provide the following in a spreadsheet (Excel or Google Sheets): * Business Name * Email Address * Business Address * Postcode * Phone Number (if available) * Business Category (e.g., barber, café, convenience store, etc.) **Important:** * Only include businesses **without their own website**. * Email addresses must be **publicly available*...
Hi, I need someone to scrape some data. I need a Google Sheet with the name, email, phone and website of every company listed here: https://www.freizeitmesse.de/ausstellerverzeichnis/#/suche/f=h-entity_orga;v_sg=0;v_fg=0;v_fpa=FUTURE It should be around 500 entries. Please share your rate and timeframe.
I’m running detailed market research and need a clean, verifiable list of U.S.-based Shopify stores that actively use Shopify Fraud Protection. The focus is on small-to-mid tier shops, capped at roughly 400,000 visits per month, so the big names are out. Please leave out grocery, clothing, perfume, and pet stores; categories such as electronics, home & garden, beauty & health—or any other niche that isn’t excluded—are welcome. I’m flexible on how you gather the data: scraping, APIs, or well-documented manual checks are all fine as long as the results are accurate. Deliverables • Spreadsheet (Excel or Google Sheets) listing: – Store URL and brand name – Estimated monthly traffic figure and data source (Similarweb, BuiltWit...
Project Description: We are looking for a freelancer to help with a data research and filtering task using LinkedIn Sales Navigator and some online tools. Workflow: 1. I will provide LinkedIn Sales Navigator filter criteria to find companies. 2. Using those filters, you need to extract the list of companies from Sales Navigator. 3. For each company, collect: - Company name - LinkedIn company page - Website domain 4. Next, check the MX records of each website using the MX checker link that I will provide. 5. If the domain uses Google Workspace (Google MX records) → proceed to the next step. 6. Then check the BIMI record using the BIMI checker link I will provide. 7. If: - MX = Google Workspace - BIMI = Not enabled → Then collect: - CEO LinkedIn pr...
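Steps 4–5 above (deciding whether a domain runs on Google Workspace from its MX records) can be sketched as follows; the live lookup would use the third-party dnspython package, while the classification itself is a pure function. The BIMI check is analogous, querying the TXT record at `default._bimi.<domain>`:

```python
def is_google_workspace(mx_hosts):
    """Classify a domain as Google Workspace from its MX hostnames.
    Google's published MX targets end in google.com or googlemail.com."""
    hosts = [h.rstrip(".").lower() for h in mx_hosts]
    return any(
        h.endswith("google.com") or h.endswith("googlemail.com")
        for h in hosts
    )

def lookup_mx(domain):
    """Not executed here: the live lookup needs the third-party
    dnspython package and network access."""
    import dns.resolver
    return [str(r.exchange) for r in dns.resolver.resolve(domain, "MX")]

print(is_google_workspace(["aspmx.l.google.com."]))           # True
print(is_google_workspace(["mail.protection.outlook.com."]))  # False
```

Separating lookup from classification also lets the filter run against MX data exported by whatever checker tool the client supplies.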
I have a curated list of specific company websites and I need an automated solution that extracts complete contact information from each one. The goal is to turn every URL into a clean, ready-to-use lead. WEBSITE : The scraper should capture: • Email addresses • Phone numbers • Mailing addresses • LinkedIn profile link • Location (city / state / country) • First and last name • Occupation / job title • Company name • Company website A well-structured CSV or Excel file is the preferred output, with each field in its own column. I am comfortable with your choice of tech—Python with BeautifulSoup, Scrapy, or Selenium are all fine—as long as the script runs reliably and respects rate limits where required. Ac...
Project Title: RPA Automation for RTO Document Upload (Excel to Web) Project Description: I need a robust automation (RPA) solution, preferably using Power Automate Desktop (PAD), to automate the document uploading process on an RTO (Regional Transport Office) portal. The bot needs to handle data from an Excel sheet, match it with local PC folders, and upload documents to the website. Detailed Workflow: 1. Data Input (Excel Integration): The bot should read an Excel file containing two primary columns: Vehicle Number and Chassis Number. It must iterate through each row one by one. 2. Web Navigation & Search: Open the RTO portal and log in (if required). Input the Vehicle Number and Chassis Number from Excel into the website's search fields to fetch the specific vehicle's deta...
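Power Automate Desktop is the requested tool, but the folder-matching rule in the workflow above can be illustrated in a few lines of Python; the vehicle number format and directory layout below are hypothetical stand-ins for the real PC folders:

```python
import tempfile
from pathlib import Path

def find_vehicle_folder(root, vehicle_number):
    """Locate the local folder whose name matches a vehicle number,
    tolerating case and surrounding-whitespace differences."""
    wanted = vehicle_number.strip().upper()
    for folder in Path(root).iterdir():
        if folder.is_dir() and folder.name.strip().upper() == wanted:
            return folder
    return None  # flag the row as 'documents missing' rather than crash

# Demo with a throwaway directory tree standing in for the real folders.
with tempfile.TemporaryDirectory() as root:
    (Path(root) / "MH12AB1234").mkdir()
    match = find_vehicle_folder(root, " mh12ab1234 ")
    print(match.name if match else "not found")  # MH12AB1234
```

The same rule translates directly to PAD's "Get files in folder" plus a case-insensitive comparison, and the `None` branch becomes a row-level error log instead of a stopped run.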
We’re looking for someone experienced in setting up and stabilizing large batches of TikTok accounts targeting the US market. The goal is to properly configure the environment so accounts look legitimate and can safely scale activity. This is not beginner work. You should already understand device/IP hygiene, account aging, and how TikTok detects suspicious behavior. What you’ll be responsible for: • Setting up multiple TikTok accounts for US usage • Configuring clean IP environments (residential/mobile preferred) • Ensuring each account has a unique fingerprint • Proper account warm-up strategy (activity simulation, browsing, engagement) • Avoiding bans, shadow bans, or verification loops • Creating a repeatable setup process for scaling accoun...
I need a small, always-on scraper that keeps an eye on a popular second-hand marketplace and alerts me the moment any Electronics listing matching my keywords appears. My priority is speed—ideally I hear about a new post within seconds, certainly no longer than a minute after it goes live. Here’s what the script must do: • Crawl the marketplace continuously without being blocked, parse every new listing, and filter it against a configurable set of electronics keywords. • Extract and store the Price and Condition fields so I can track changes and avoid duplicates. • Push an instant notification (email, SMS, or Slack—whichever you prefer to wire up) each time a fresh match is found. I’m comfortable with a Python 3 stack—think Requests/...
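The filter-and-dedupe core of such a monitor can be sketched as below; the keyword list and feed records are invented examples, and the actual polling, parsing, and notification wiring (email/SMS/Slack) are left out:

```python
def matches_keywords(title, keywords):
    """Case-insensitive keyword filter for new listings."""
    t = title.lower()
    return any(k.lower() in t for k in keywords)

class ListingTracker:
    """Remembers seen listing IDs and last-known price/condition so
    duplicates are skipped and changes still trigger an alert."""
    def __init__(self):
        self.seen = {}

    def is_new_or_changed(self, listing):
        key = listing["id"]
        state = (listing["price"], listing["condition"])
        if self.seen.get(key) == state:
            return False  # exact duplicate of what we already alerted on
        self.seen[key] = state
        return True

KEYWORDS = ["gpu", "mechanical keyboard", "oled"]
tracker = ListingTracker()
feed = [
    {"id": 1, "title": "RTX 3080 GPU", "price": 400, "condition": "used"},
    {"id": 1, "title": "RTX 3080 GPU", "price": 400, "condition": "used"},  # dup
    {"id": 1, "title": "RTX 3080 GPU", "price": 350, "condition": "used"},  # drop
]
alerts = [
    l for l in feed
    if matches_keywords(l["title"], KEYWORDS) and tracker.is_new_or_changed(l)
]
print([(l["id"], l["price"]) for l in alerts])  # [(1, 400), (1, 350)]
```

In the real script this loop would run on a short polling interval, with the `seen` state persisted to disk so restarts do not re-alert on old listings.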
Job Description I am looking for a technically proficient developer with a background in Computer Science and Local SEO to build and manage a Mobile Mechanic Lead-Gen Engine. Your primary responsibility is to create and scale unique digital entities for mobile car repair services, ensuring each location dominates the Google Maps 3-Pack through automated "Trust Signals." Project Scope & Compensation • Phase 1 (Trial): Management of 3 initial locations at $300/month. • Phase 2 (Growth): Upon reaching Day 50 performance metrics, scaling to 10 locations per developer at $1,000/month. • Payment Trigger: Success is measured by the completion of the 50-day roadmap and the achievement of "Human Dialogue" metrics (verified customer interactions). Key Responsib...
I need a search engine that connects to my Excel sheet for efficient data retrieval. The sheet primarily contains text data across 1-5 columns. Key requirements: - Ability to perform exact match searches - Simple and user-friendly interface Ideal skills and experience: - Experience with search engine development - Proficiency in handling Excel data - Strong programming skills, preferably in Python or similar languages Looking forward to your bids!
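A minimal sketch of the exact-match core — here the sheet is stood in by inline CSV, whereas a real tool would read the workbook directly (e.g. with openpyxl or pandas) and put a small UI in front:

```python
import csv
import io

def exact_match(rows, column, value):
    """Return every row whose `column` equals `value` exactly."""
    return [r for r in rows if r.get(column) == value]

# Stand-in for data exported from the Excel sheet.
sheet = io.StringIO("name,city\nAlice,Berlin\nBob,Paris\nAlice,Madrid\n")
rows = list(csv.DictReader(sheet))

print(exact_match(rows, "name", "Alice"))  # two matching rows
print(exact_match(rows, "name", "alice"))  # exact means case-sensitive: []
```

For sheets of a few thousand rows a linear scan like this is instant; an index (a dict keyed by column value) only becomes worth building at much larger scale.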
Project Title: WhatsApp to Web Portal Automation (Python) - Multi-Recharge Distributor Project Description: I am looking for a developer to automate a repetitive task for my multi-recharge business. I am a distributor for a portal () and I currently manage retailer balance transfers manually via WhatsApp. Current Workflow: Retailers send a payment screenshot and a message via WhatsApp (Format: PAY [ID] [Amount]). I manually log in to the web portal or mobile app. I enter the Retailer ID and the Amount to transfer the wallet balance. I do not verify screenshots instantly; I manually verify bank statements at night. What I Need: I need a "Robot" or an automation script (using Python Selenium ) that can: Trigger: Read incoming WhatsApp notifications. Extract Data: Automatica...
READ FULLY BEFORE BIDDING. Bids that ask questions already answered here will be rejected. Bids over $1,500 USD will be rejected automatically. Only developers who have previously built similar portal automation systems will be considered. ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ PROJECT OVERVIEW ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ We need a Python-based multi-client automation system that monitors a government visa appointment portal (Turkey), detects available slots in real time, and completes the reservation process automatically on behalf of multiple applicants. The system manages a queue of applicants, runs configurable parallel sessions, and operates 24/7 as a background service. ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ TARGET PORTALS — Phase 1 (5 countries) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ...
Title Build Comprehensive Global Movie & TV Metadata Database for Recommendation Engine Project Overview I am building a personal recommendation engine that predicts my rating for movies and TV shows based on a large history of titles I have already rated. To support accurate predictions, I need a global media metadata database that contains rich structured information for movies and TV series. The dataset should combine multiple trusted sources and be designed for machine-learning comparison against my rating history. Scope of Work Build a master dataset containing global movie and TV metadata. This will serve as the candidate pool for prediction models. The database should include titles from: • IMDb official dataset • The Movie Database API • Optional enri...
PROJECT TITLE Web Scraping Developer for Global Legal & Regulatory Data Collection PROJECT OVERVIEW We are looking for a developer who can build an automated system to collect legal and regulatory documents from multiple global sources. The goal is to create a scalable automated pipeline that can gather legal data across multiple jurisdictions and regulatory domains. DATA COLLECTION SCOPE The system will collect information related to: - Medical law and healthcare regulation - Medical advertising regulation - Corporate formation and company governance laws - Investment regulation (stocks, cryptocurrency, real estate) - Tax law and administrative tax rulings - Beauty and cosmetic regulation - Medical and cosmetic manufacturing compliance - Import and export law - Customs and tariff...
More details: Where will the product information come from? From all 3 of the sources above. What information should successful freelancers include in their application? This question was skipped by the user. How soon do you need your project completed? ASAP. I prefer to communicate in the Aromanian language for this project.
Each week I have a narrow booking window on our club’s Northstar Technologies app and I need that tee time grabbed the instant the system opens. The script must log in automatically, choose one of our three courses based on a simple setting I can change each week, then secure a slot that falls anywhere inside a time range I provide. Your solution has to run reliably on Windows and cope gracefully with captchas, latency, or the app’s occasional hiccups. As long as it is rock-solid I’m flexible on the language and tooling you use, but I do want clear instructions for installing and scheduling it so the task fires off at the exact minute I specify.

Deliverables
• Source code and any auxiliary files
• A short README explaining setup, configuration for cour...
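The slot-selection part of a brief like this reduces to a small, testable function: read the weekly config, then take the first offered time inside the window. A minimal sketch with made-up course names and config keys (firing the script at the exact minute is best delegated to Windows Task Scheduler, which the README would document):

```python
from datetime import time

# Weekly settings the owner edits by hand; the course names and keys
# here are placeholders, not the club's real identifiers.
CONFIG = {
    "course": "north",                    # one of: north, south, lakeside
    "window": (time(7, 30), time(9, 0)),  # acceptable tee-time range
}

def pick_slot(available_slots, config=CONFIG):
    """Return the earliest offered slot inside the configured window,
    or None. available_slots is a list of datetime.time values parsed
    from the booking feed."""
    lo, hi = config["window"]
    for slot in sorted(available_slots):
        if lo <= slot <= hi:
            return slot
    return None
```

Keeping the choice logic pure like this means the fragile parts (login, latency, retries) can be tested and hardened separately from the decision of which slot to take.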
I need an AI-driven pipeline that takes a topic idea and automatically does everything that follows: it finds the most-searched Google keywords, retrieves the top 10 URLs for each term, scrapes the images and other media assets from those pages, adapts or re-edits the visuals to match my branding guidelines, then publishes the finished piece to my website while triggering the appropriate lead-capture sequence. All three stages—keyword research and analysis, scraping plus in-house editing, and final posting with lead automation—must run without manual intervention. A multi-agent architecture is ideal, so feel free to leverage LangChain, CrewAI, AutoGPT, or any comparable framework that lets independent agents pass tasks between one another. Think of one agent focused on Google ...
We are building a cloud-based cross-listing engine for resellers (eBay, Poshmark, Mercari, Vinted, etc.).

What you’ll be doing
- Architecting a headless browser fleet that scales horizontally.
- Implementing "human-like" behavioral simulation.
- Building a Local Proxy Tunneling system.
- Building a robust "Session Mirroring" engine that captures, encrypts, and replays storage states (cookies/localStorage) from our extension to our cloud workers.

You should be able to explain the difference between a JA3 fingerprint and a Canvas fingerprint. Experience with serverless execution (Azure Functions) and container orchestration. You’ve spent time in Chrome DevTools looking at how sites track "Device IDs" or handle rolling tokens.

To prove you actually...
I need an experienced Python developer to build a commercial multi-client visa appointment automation system for Turkey-based applicants. Full source code ownership is required upon delivery.

---
PROJECT BACKGROUND
I run a visa consultancy service in Turkey. My clients need visa appointments from VFS Global portals. The current manual process is too slow and I need a fully automated, scalable system that handles 50-100+ clients simultaneously.

---
TARGET PORTALS
Source: Turkey ()
Target countries (minimum 6-7):
- United Kingdom
- Germany
- France
- Netherlands
- Italy
- Spain
- Sweden
System must be modular so new countries can be added later.

---
CORE FEATURES REQUIRED
[1] MULTI-CLIENT MANAGEMENT DASHBOARD
- Register and manage 100+ client records
- Per client: Full name, Passpo...
I need a Node JS service that pulls every piece of live information available on and its Android app (). The feed must refresh virtually every second, covering scores and statistics, detailed player profiles, and full match schedules. Your code should expose the data through robust JSON endpoints while respecting the target platform’s structure.

Alongside the scraper, build a lightweight admin panel where I can:
• Whitelist or block consuming domains / mobile apps in real time
• View scrape and API-access logs at a glance
• Add, edit or disable user accounts, assigning granular access rights

Please design the solution so it can scale gracefully, use common Node JS tooling (TypeScript, Express, Puppeteer, Cheerio, or similar), and keep response latency low enoug...
I’m looking for a single application that can scan—and when possible automatically secure—usernames across several social platforms. The must-cover list is Instagram, TikTok and Snapchat, with bonus points if you can also include Gunslol, Discord and any other networks you already know how to tap into.

Core workflow
• I enter a series of prefixes or exact names, choose whether I want 2, 3 or 4-character checks, then hit start.
• Your script pings each platform’s availability endpoint or page, handles any “Are you a robot?” challenge in the background, and reports back instantly.
• The moment a name is free, the tool registers it with a pre-configured email account so the handle is locked in before anyone else grabs it.

Notificati...
Project Overview
I am seeking an expert Magento 2 Developer with strong data extraction (scraping) capabilities to build out a comprehensive medical supply catalog. The previous developer started a sample set but is no longer on the project. I need a professional to scrape high-volume product data and perform a structured, complex import into my Magento 2 staging site.

Scope of Work
1. Data Scraping & Extraction
Source: Target website (medical supply industry) to be provided.
Requirements: Full extraction of Product Titles, High-Res Images, Long/Short Descriptions, SKUs, and Technical Specifications.
Variation Mapping: Correctly identify and link "Simple" products to their "Configurable" parents (e.g., mapping different sizes or packaging options to a single prod...
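The "Variation Mapping" step is essentially a group-by on a shared parent key before export. A minimal sketch of how scraped variant rows could be folded into Magento-2-style import rows; `configurable_variations` is the column Magento's CSV import uses to link simples to a configurable parent, while the SKUs and the `size` attribute here are made-up examples:

```python
# Hypothetical scraped rows: one entry per size variant of a product.
scraped = [
    {"sku": "GLOVE-S", "parent": "GLOVE", "size": "S", "title": "Exam Glove"},
    {"sku": "GLOVE-M", "parent": "GLOVE", "size": "M", "title": "Exam Glove"},
    {"sku": "GAUZE-10", "parent": "GAUZE", "size": "10pk", "title": "Gauze Pad"},
]

def to_magento_rows(items):
    """Emit import rows: one 'configurable' parent per group, plus each
    variant as a 'simple' linked via the configurable_variations column."""
    groups = {}
    for it in items:
        groups.setdefault(it["parent"], []).append(it)
    rows = []
    for parent, variants in groups.items():
        rows.append({
            "sku": parent,
            "product_type": "configurable",
            "name": variants[0]["title"],
            "configurable_variations": "|".join(
                f"sku={v['sku']},size={v['size']}" for v in variants),
        })
        for v in variants:
            rows.append({"sku": v["sku"], "product_type": "simple",
                         "name": f"{v['title']} ({v['size']})",
                         "configurable_variations": ""})
    return rows
```

Building the parent rows programmatically like this avoids the usual failure mode of hand-edited import sheets, where a variant's parent SKU drifts out of sync with its `configurable_variations` entry.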
Data Sniper B2B (Dominican Republic) - Hunting Open Vacancies and Phone Numbers

THE EXACT MISSION (WHAT YOU WILL HUNT FOR):
I am not after static lists. Your sole mission is to track down companies in the Dominican Republic that have OPEN POSITIONS / VACANCIES (preferably mid-management and executive roles that have sat unfilled for days). An open vacancy is the only valid trigger for extracting a prospect.

THE DAILY DELIVERABLE (80 prospects):
You will deliver the data in our 12-field matrix. ATTENTION: THE PHONE NUMBER IS KING. Email is secondary. If you hand me a prospect with a perfect email but no valid phone number to contact...
Contact selection criteria: Upholstered furniture manufacturing
Area: Romania, Bulgaria, Serbia
Data scope:
- Company name: 100%
- Country: 100%
- Telephone: 70%
- Email: 100%
- Industry: 100%
- Turnover (for available companies)
Number of contacts: 1,100