At data.world, you will:
- build Apache Spark code in Scala to drive a pipeline of graph-based transformations.
- work closely with product, engineering, documentation, and business stakeholders to ensure the delivery and improvement of the collector product.
- collaborate with a small, dedicated team.
- execute on a key area of the data.world platform.
- learn… constantly.
We’d love to see:
- a BS in a technology or engineering field, or equivalent experience.
- 3+ years of experience as an engineer.
- experience working with Apache Spark and with Scala or Java systems.
- strong computer science fundamentals, particularly in algorithms, graph theory, and relational data (SQL).
- experience with AWS (Amazon Web Services), a strong plus.
- strong opinions, loosely held. You admit when you're wrong, and integrate new learnings quickly.
- a craftsperson. You know your way around and take pride in your work.
- an appreciation of the user, even when you're building a CLI or API.
- familiarity with a variety of languages and libraries. You know which tools to use for which tasks.
- the ability to provide, as well as seek out, mentorship.
- passion for continuous integration and test-driven engineering methodologies.
- strong written, verbal, and visual communication skills. You should be able to articulate your decisions, whiteboard new solutions, present ideas concisely, and defend your beliefs.
- an appetite to try new things. You’re curious and excited to improve your process, and always looking to learn. You ask questions and don't shy away from challenges.
Big pluses include:
- interest in the semantic web, RDF, and/or graph-based data storage technologies.
- experience with Docker.
- experience with Dropwizard or another Java-based web framework.
- experience working in a fast-paced startup environment.