Overview

We’re looking for an experienced data engineer or analytics specialist to help us bring multiple existing data sources together into a single, well-structured Amazon Redshift environment. The goal is to set up a reliable data foundation so our team can run analytics, reporting, and future AI models from one clean source of truth.

⸻

Data Sources to Integrate

You’ll be connecting and consolidating data from:

• Aurora (MySQL) – transactions, users
• HelpScout – customer service tickets
• Mailchimp – email campaigns & engagement
• Meta Ads (Facebook Marketing API) – campaign performance, spend, ROAS
• Google Analytics 4 (GA4) – web sessions & conversions

All data should flow into Amazon Redshift Serverless, using S3 as the raw landing zone.

⸻

Deliverables

1. Data Warehouse Setup
• Configure Redshift + S3 structure (raw, staging, analytics schemas)
• Set up secure, automated access and permissions

2. ETL / ELT Pipelines
• Build or configure ingestion pipelines (Airbyte, Fivetran, or custom Python/Lambda)
• Create transformations (SQL / dbt) for clean, analytics-ready tables

3. Data Modeling
• Standardize user IDs and timestamps across all sources
• Produce core joined tables (users, orders, engagement, campaigns, support)

4. Documentation
• Schema diagram, data dictionary, and connection details
• Clear handover instructions for internal use

5. Validation
• Sample dashboards or queries to confirm the data joins correctly

Rough sketches of the kind of warehouse layout, modeling, and validation we have in mind appear in the appendix at the end of this brief.

⸻

Required Skills

• Strong experience with AWS Redshift, S3, and SQL
• Hands-on with ETL tools (Airbyte, Fivetran, Stitch, or custom scripts)
• Familiarity with API integrations (Mailchimp, Meta Ads, GA4, HelpScout)
• Knowledge of dbt or similar for transformation and testing
• Good communication and documentation skills

⸻

Nice to Have

• Experience with AWS Glue / Lambda / Step Functions
• Understanding of marketing analytics (attribution, ROAS, LTV)

⸻

Project Details

• Location: Remote (UK / EU time zone preferred)
• Start: Immediate
• Duration: Approx. 4–6 weeks, with potential for follow-up work

⸻

To Apply

Please include:

• Short summary of similar AWS/Redshift projects
• Preferred ETL tool or stack
• Example of a data model or architecture you’ve built (no sensitive info)

⸻

✅ Objective: Deliver a working Redshift data warehouse with automated pipelines, consolidated datasets, and clear documentation — ready for analytics and AI use.
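
⸻

Appendix: Illustrative Sketches

To make the scope concrete, here are three rough sketches of the kind of work described above. They are illustrative only; every schema, table, column, bucket, and IAM role name below is a placeholder rather than a detail of our actual environment, and applicants are welcome to propose a different layout or toolchain.

1. Warehouse Layout

A minimal Redshift SQL sketch of the raw / staging / analytics split, with an example COPY load from the S3 landing zone:

    -- One schema per layer (names are placeholders).
    CREATE SCHEMA IF NOT EXISTS raw;        -- data landed as-is from S3
    CREATE SCHEMA IF NOT EXISTS staging;    -- typed, cleaned, deduplicated
    CREATE SCHEMA IF NOT EXISTS analytics;  -- joined, business-ready tables

    -- Example raw table for Aurora order exports (columns are placeholders).
    CREATE TABLE IF NOT EXISTS raw.aurora_orders (
        order_id   BIGINT,
        user_id    BIGINT,
        amount     DECIMAL(12,2),
        created_at TIMESTAMP,
        _loaded_at TIMESTAMP DEFAULT GETDATE()  -- load audit column
    );

    -- COPY from the S3 landing zone; the bucket, prefix, and IAM role
    -- are placeholders to be replaced with the real environment's values.
    COPY raw.aurora_orders (order_id, user_id, amount, created_at)
    FROM 's3://example-landing-bucket/aurora/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
    FORMAT AS CSV
    TIMEFORMAT 'auto';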
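
2. Standardizing Identity and Timestamps

A minimal dbt-style model sketch for the “standardize user IDs and timestamps” deliverable. The raw sources (raw.aurora_users, raw.mailchimp_members) and their columns are hypothetical, as is the email-based match for sources without a native user ID:

    -- models/staging/stg_unified_users.sql (hypothetical dbt model)
    with aurora_users as (
        select
            cast(user_id as varchar)            as user_id,   -- canonical key
            lower(trim(email))                  as email,
            convert_timezone('UTC', created_at) as created_at_utc,  -- assumes source is UTC
            'aurora'                            as source_system
        from {{ source('raw', 'aurora_users') }}
    ),

    mailchimp_members as (
        select
            cast(null as varchar)                     as user_id,  -- resolved later via email
            lower(trim(email_address))                as email,
            convert_timezone('UTC', timestamp_signup) as created_at_utc,
            'mailchimp'                               as source_system
        from {{ source('raw', 'mailchimp_members') }}
    )

    select * from aurora_users
    union all
    select * from mailchimp_members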
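
3. Validation Queries

Hypothetical spot-checks confirming that the modeled tables join cleanly on the standardized user key (table names are again placeholders). Orders and engagement are pre-aggregated per user so the joins can’t inflate counts or revenue:

    with orders as (
        select user_id, count(*) as orders, sum(amount) as revenue
        from analytics.orders
        group by user_id
    ),
    engagement as (
        select user_id, count(distinct campaign_id) as campaigns_engaged
        from analytics.email_engagement
        group by user_id
    )
    select u.user_id, o.orders, o.revenue, e.campaigns_engaged
    from analytics.users u
    left join orders o on o.user_id = u.user_id
    left join engagement e on e.user_id = u.user_id
    order by o.revenue desc nulls last
    limit 20;

    -- Orders whose user_id has no match in the unified users table
    -- (should be zero, or at least explainable).
    select count(*) as orphan_orders
    from analytics.orders o
    left join analytics.users u on u.user_id = o.user_id
    where u.user_id is null;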