Empatica

Cloud Engineer

Created on 30-06-2018
Location: Milan

Description

As a Cloud Engineer, you will work alongside all the engineering teams at Empatica to make sure our products and infrastructure are robust, fault-tolerant and monitored at all times. When you're around, other engineers will know they don't have to worry because you are optimizing their work behind the scenes.

We are building medical devices that save lives. As you can imagine, this is a one-of-a-kind job that requires skill, responsibility, the ability to deliver, and attention to detail: when people's lives are involved, sloppiness cannot be part of the equation. Being a Cloud Engineer at Empatica means working hard to deliver quality both to our customers and to your colleagues: your work will make people happy, and will ultimately save lives.

You will be involved in:

  • Building a scalable data analysis infrastructure for our data science team, capable of digesting different physiological signals in real time.
  • Designing, building, testing, and scaling our API infrastructure.
  • Improving and simplifying development and deployment processes across embedded, mobile, backend, frontend and data analysis engineering teams.
  • Managing build/deployment pipelines for continuous integration and continuous delivery to improve the quality and availability of products.
  • Working closely with developers, supporting them with better tools and chasing away their release-day nightmares.
  • Taking care of backup, redundancy, and disaster recovery.
  • Maintaining products once they are live by measuring and monitoring availability, latency, and overall system health.
  • Improving and maintaining our monitoring and alerting systems.
  • Working closely with product managers on product discovery, project progression and estimation, testing, deployment, and iterations.
  • Overseeing security practices applied to our infrastructure and helping run audits on it.
  • Making sure our infrastructure is HIPAA and FDA compliant.

Some technologies we currently use, and you'll certainly be exposed to:

  • Amazon Web Services (EC2, ECS, S3, Lambda, RDS, CloudFront, SQS, ...).
  • Docker, Kubernetes, Terraform.
  • MySQL, DynamoDB, Redis.
  • ELK stack.
  • Jenkins, Bitrise.
  • Git.

Why work at Empatica

You’ll have a real opportunity to improve lives around the world, as part of a tight-knit team that shares knowledge and is eager to keep learning and pushing to create top-notch products with a meaningful impact. If you jump on board, we can guarantee it won't be an easy ride, but it will be one of the most rewarding experiences of your career: one that will let you learn a lot and test your whole skill set on multiple projects that are already helping thousands of people worldwide.

Requirements

The ideal candidate for this position:

  • Has 3+ years of experience as a Cloud/DevOps/Infrastructure Engineer.
  • Has an academic degree (BSc or MSc) in computer science or a related field.
  • Is familiar with logging and monitoring technologies, and knows what it takes to stay in control.
  • Has proven experience deploying, monitoring, and troubleshooting large-scale distributed systems.
  • Has knowledge of web servers (e.g. Nginx) and relational and NoSQL databases (e.g. MySQL, DynamoDB).
  • Has worked with server provisioning tools (e.g. Ansible) and containers (e.g. Docker).
  • Has experience with Infrastructure-as-Code (e.g. Terraform).
  • Has strong programming and scripting skills (e.g. Python, Go, Ruby).
  • Has knowledge of Continuous Integration systems for backend and mobile apps (e.g. Jenkins).
  • Has good knowledge of Git.
  • Consistently demonstrates accuracy, thoroughness, and attention to detail, always looks for ways to improve and promote quality, and leverages data and feedback to improve performance.
  • Is proficient in English (native proficiency is highly appreciated).

Plus:

  • Strong understanding of and affinity for machine learning and data mining.
  • Proven experience scaling to terabyte-size datasets and managing pipelines to process them.
  • Experience with Microservices and Serverless architectures.
  • Experience with Hadoop ecosystem tools (Spark/Kafka/HDFS/YARN...).
  • Experience building data pipelines and ETL infrastructure to handle multiple data sources and medium-to-large datasets.
  • Familiarity with algorithms, data structures, and complexity analysis.
  • Experience scaling one or more systems or organizations.
  • Expertise in building automated deployment pipelines.
  • Prior startup experience.

You are an ideal candidate for Empatica if you:

  • Know that life is too short to do petty things. You feel an obligation to work on something that will leave the world in a better place.
  • Discuss openly face to face: good ideas are more important than authority.
  • Try to question dogma and the status quo: you prefer merit and objectivity.
  • Don’t give up on a challenge, and persevere to overcome setbacks. You are focused on action and results.
  • Think first about the success of our customers and team before your own interests.
  • Have fun and don’t take yourself too seriously.

Benefits

We expect a lot from you, but your efforts will be rewarded with great benefits:

  • Competitive salary
  • Top-notch equipment
  • Flexible work hours
  • Personal growth programs
  • Free healthy lunch
  • Amazing summer office in Sardinia
  • Gym membership
  • Free cookies, if you deserve them

Benefit

  • A Unique & Extraordinary Team
  • Competitive Salary
  • Free Lunch & Cookies!
  • Massages
  • Personal Development Support

POSITION CLOSED
