We’re looking for someone who is excited about the chance to build infrastructure for other deep learning researchers (ex: experiment tracking tools, model debugging methods, automated hyperparameter optimizers, developer tooling, etc).
Unique aspects of this role
- Focus on coding and nothing else. No expectation that you attend standup, spend time doing sprint planning, or any of that other stuff. We’ll take care of creating the specs and just talk through them with you to ensure there are no questions, and you’ll be free to focus on coding.
- Work 9 to 5. Go have a life outside of work! :) Because the role is focused on coding, we don’t expect you to be putting in crazy hours.
- Work with other great programmers who care about their craft. Ex: even our deep learning code has tests (though we’re not dogmatic—they’re only tested to the extent that tests are useful). We have deterministic formatting, zero linter errors, wide type coverage with MyPy, etc.
- Get paid above market. Our hiring philosophy is that we’d rather have 1 great engineer than 2 mediocre engineers, so we’re willing to pay more than average (and unlike most startups, that means cash, not equity).
- Work with a team that understands remote work. Some of us have been working remotely for decades now, and we understand how to make a good environment for remote work.
What we’re looking for
- You must be a great software engineer who enjoys building systems that support other engineers. We’re flexible on the exact number of years of experience, but this is likely not a great fit for those with fewer than 5 years of post-college work experience (or the equivalent—we love people who are self-taught).
- You must have significant prior experience with machine learning, ideally having worked with PyTorch on non-trivial projects.
- You must be very comfortable writing Python.
- You should be comfortable with bash, Linux system internals, etc. Ex: knowing what bpftrace is and how to use it, or how to install debugging symbols on a stock system (and why that’s useful).
What you’ll work on
- Create world-class deep learning research infrastructure and tooling. Ex: we made a simple hyperparameter optimizer that lets us tune models without spending brain cycles or tons of compute. We love tools that free us up to work at a higher level.
- Build new features that make experimentation easier. Ex: automated checks for vanishing or exploding gradients and other obvious problems.
- Open source the best parts of our internal tooling and maintain our existing repositories. Ex: Jupyter Ascending, a tool we made that lets you write code in a real editor, like vim or emacs, and instantly sync it into a Jupyter notebook—the best of both worlds. Fixing bugs and resolving issues for the broader software community is important to us.
- Contribute patches to fix bugs in other open source repositories.
- Make our experiment infrastructure more robust. Ex: better handling for machines being killed in a distributed setup, or enabling experiments to be run easily on multiple cloud providers or even a local machine, etc.
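To give a flavor of the kind of tooling involved, the automated gradient checks mentioned above could be sketched roughly like this. This is a minimal illustration, not our actual tooling; `check_gradients` and the threshold values are hypothetical, and in a real PyTorch setup the gradients would come from `p.grad` tensors rather than plain lists.

```python
import math

# Hypothetical thresholds -- sensible values depend on the model,
# loss scale, and optimizer in use.
VANISHING_THRESHOLD = 1e-7
EXPLODING_THRESHOLD = 1e3

def check_gradients(named_grads):
    """Flag parameters whose gradient L2 norm looks vanishing or exploding.

    `named_grads` maps parameter names to flat lists of gradient values.
    Returns a list of (name, norm, problem) tuples for suspect parameters.
    """
    problems = []
    for name, grad in named_grads.items():
        # L2 norm of the flattened gradient for this parameter.
        norm = math.sqrt(sum(g * g for g in grad))
        if norm < VANISHING_THRESHOLD:
            problems.append((name, norm, "vanishing"))
        elif norm > EXPLODING_THRESHOLD:
            problems.append((name, norm, "exploding"))
    return problems
```

A check like this could run after each backward pass and surface warnings through the experiment tracking tools, so obvious training pathologies are caught without anyone staring at loss curves.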
Generally Intelligent is an early-stage AI research company. We’re working directly on building human-level general machine intelligence that can learn naturally, in the way that humans do. Our mission is to understand the fundamentals of learning and build safe, humane machine intelligence.
We’re supported by investors that include Y Combinator, researchers from OpenAI, the founders of Dropbox, Lightspeed Venture Partners, and Threshold Ventures (formerly DFJ).