● Administration/maintenance of Elasticsearch servers;
● Administration/maintenance of Redis and Memcached servers;
● Design, implement, test, deploy, and maintain stable, secure, and scalable data engineering solutions and pipelines in support of data and analytics projects, including integrating new sources of data into our data warehouses;
● Produce scalable, replicable code and engineering solutions that help automate repetitive data management tasks;
● Help other Data Engineering staff troubleshoot their SQL, Python or other code;
● Work in Cloud environments (AWS, GCP);
● Train other DE staff on these skills.
● Strong mastery of relational databases, SQL, and Extract, Transform, Load (ETL) processes;
● Proficiency in Python or another object-oriented/functional scripting language such as Scala, Java, or C++, especially for data manipulation and analysis, and the ability to build, maintain, and deploy processes with these tools;
● Strong experience with big data tools such as Kafka, Hadoop, and Spark;
● Strong experience with data pipeline and workflow management tools such as Airflow, Azkaban, and Luigi;
● Have installed/configured/administered at least 2 of the following databases: Redshift, DynamoDB, Redis, Memcached, MongoDB, Cassandra;
● Ability to differentiate and give examples of the main database engine types on the market: column-oriented (e.g., HBase), document (e.g., MongoDB), key-value (e.g., Redis), graph (e.g., Neo4j), and multi-model (e.g., Couchbase);
● Understanding of Data Lake vs. Data Warehouse architectures and of the Hadoop ecosystem (how its tools interconnect);
● Basic experience with Git for version control and Docker/docker-compose;
● Experience with Amazon RDS for PostgreSQL, MySQL, and Aurora;
● Desirable: knowledge of another on-premises or cloud DBMS (MySQL, Cassandra, MongoDB, or any other NoSQL database);
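To illustrate the ETL skill the requirements describe, here is a minimal sketch using only the Python standard library (sqlite3). The table and column names (`raw_orders`, `orders`, `amount_cents`) are hypothetical, chosen only for the example.

```python
import sqlite3

def run_etl(conn: sqlite3.Connection) -> int:
    """Extract raw rows, transform them, and load them into a target table."""
    cur = conn.cursor()
    # Extract: read raw order rows from the (hypothetical) source table.
    rows = cur.execute("SELECT id, amount_cents FROM raw_orders").fetchall()
    # Transform: convert cents to a decimal amount and drop invalid rows.
    cleaned = [(oid, cents / 100.0) for oid, cents in rows if cents >= 0]
    # Load: write the cleaned rows into the warehouse table.
    cur.executemany("INSERT INTO orders (id, amount) VALUES (?, ?)", cleaned)
    conn.commit()
    return len(cleaned)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        """
        CREATE TABLE raw_orders (id INTEGER, amount_cents INTEGER);
        CREATE TABLE orders (id INTEGER, amount REAL);
        INSERT INTO raw_orders VALUES (1, 1999), (2, -5), (3, 450);
        """
    )
    print(run_etl(conn))  # prints 2: the negative-amount row is dropped
```

In a real pipeline the same three steps would typically run against a warehouse such as Redshift and be scheduled by a workflow manager rather than invoked by hand.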
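Workflow managers such as Airflow and Luigi model a pipeline as a directed acyclic graph of tasks and run each task only after its dependencies finish. The ordering idea can be sketched with the standard-library `graphlib` (Python 3.9+); the task names below are hypothetical.

```python
from graphlib import TopologicalSorter

# Each key is a task; its set lists the tasks it depends on.
pipeline = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"transform", "quality_check"},
}

# static_order() yields tasks so every dependency precedes its dependents.
order = list(TopologicalSorter(pipeline).static_order())
print(order)  # ['extract', 'transform', 'quality_check', 'load']
```

A tool like Airflow adds scheduling, retries, and monitoring on top of this ordering, but the underlying dependency resolution is the same.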