This article describes the responsibilities and requirements for developing and maintaining EA’s unified Big Data pipeline for live services. It covers collaboration with cross-functional teams; evaluation of integrations between live service, studio, and vendor solutions; handling large datasets from 20+ game studios to enable data-driven decision-making and experimentation; and engagement with stakeholders, including Legal and Privacy, for compliant delivery.
Pipeline responsibilities and operational focus
Develop and maintain EA’s unified Big Data pipeline for live services, ensuring it supports live operations and experimentation. Work with large datasets from 20+ game studios to enable data-driven decision-making. Streamline the workflows that connect data ingestion, processing, and delivery for live-service features.
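The ingestion-processing-delivery flow above can be sketched in Java (the language named in the requirements below). This is a minimal illustration under assumed names: the `"studioId,eventName"` line format, the class and method names, and the per-studio event count as the delivered aggregate are all hypothetical, not EA's actual schema or pipeline.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hedged sketch of an ingest -> process -> deliver flow over telemetry lines.
// Assumes each raw line looks like "studioId,eventName"; all names are illustrative.
public class TelemetryPipeline {

    /** Ingest: keep only well-formed "studioId,eventName" lines. */
    static List<String[]> ingest(List<String> rawLines) {
        return rawLines.stream()
                .map(line -> line.split(",", 2))
                .filter(parts -> parts.length == 2 && !parts[0].isBlank())
                .collect(Collectors.toList());
    }

    /** Process: aggregate event counts per studio. */
    static Map<String, Long> process(List<String[]> events) {
        return events.stream()
                .collect(Collectors.groupingBy(parts -> parts[0].trim(),
                        Collectors.counting()));
    }

    /** Deliver: here we just return the aggregate; a real pipeline would publish it. */
    static Map<String, Long> run(List<String> rawLines) {
        return process(ingest(rawLines));
    }

    public static void main(String[] args) {
        List<String> raw = List.of("studioA,match_start", "studioB,match_start",
                "studioA,match_end", "malformed_line");
        System.out.println(run(raw));
    }
}
```

Keeping each stage a pure function over its input makes the stages easy to test in isolation and to swap for real connectors (queues, object storage, downstream services) later.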
Cross-functional collaboration and integrations
Collaborate with multiple teams to align pipeline capabilities with product needs and studio technology:
- Content Management & Delivery
- Messaging
- Segmentation
- Recommendation
- Experimentation
Evaluate integrations between live service solutions, studio tech stacks, and vendor solutions. Engage with game studios, product managers, and Legal and Privacy teams to deliver end-to-end compliant solutions.
Data scale, experimentation, and compliance
The pipeline must support large-scale datasets and experimentation across many studios. Workstreams include enabling data-driven decision-making, supporting experimentation frameworks, and ensuring solutions meet Legal and Privacy requirements for end-to-end compliance.
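One common building block of experimentation frameworks like those mentioned above is deterministic bucketing: hashing a player identifier into a stable variant so assignments stay consistent across sessions. The sketch below assumes a SHA-256-based scheme and hypothetical identifiers; it is an illustration of the technique, not EA's experimentation framework.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hedged sketch: stable A/B assignment by hashing (experiment, player) into a bucket.
public class ExperimentBucketer {

    /** Map (experimentId, playerId) to a stable bucket in [0, numBuckets). */
    static int bucket(String experimentId, String playerId, int numBuckets) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(
                    (experimentId + ":" + playerId).getBytes(StandardCharsets.UTF_8));
            // Take the first four bytes as a non-negative int, then reduce modulo numBuckets.
            int value = ((digest[0] & 0x7f) << 24) | ((digest[1] & 0xff) << 16)
                    | ((digest[2] & 0xff) << 8) | (digest[3] & 0xff);
            return value % numBuckets;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 should always be available", e);
        }
    }

    public static void main(String[] args) {
        // The same inputs always land in the same bucket, so assignments are sticky.
        System.out.println(bucket("exp-42", "player-123", 2));
    }
}
```

Because assignment depends only on the hash of the inputs, no per-player assignment table is needed, and including the experiment id in the hash keeps buckets independent across experiments.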
Technical requirements and tools
Required qualifications and technical skills include:
- Education: Bachelor's or Master's degree in Computer Science or a related field
- Experience: 1-2 years of relevant industry experience
- Core skills: strong computer science fundamentals
- Languages and frameworks: proficiency in Java; front-end skills (HTML/CSS/JavaScript, preferably React); familiarity with back-end frameworks (e.g., Spring Boot)
- Cloud and pipelines: multi-cloud data pipeline experience (preferably AWS)
- Databases: experience with columnar, relational, and document databases
- Infrastructure and observability: Docker/Kubernetes, Prometheus, Grafana
- CI/CD and tooling: GitLab CI/CD
- Soft skills: strong communication abilities
In summary, the role focuses on building and maintaining a unified Big Data pipeline for live services, collaborating with multiple teams and stakeholders, working with large datasets from 20+ game studios to enable experimentation, and meeting technical and compliance requirements as specified.