We are building a new type of computer to enable the future of machine intelligence.
Why Fathom
The only example of human-level intelligence is the human brain, which has ~125 trillion synapses - orders of magnitude more than the parameter counts of today’s largest artificial neural networks. Fathom Radiant was founded to bridge this gap.
The limitation is the interconnect technology of traditional electronic computers - put simply, the challenge is moving bits around. By combining the complementary strengths of optics and electronics, we created a revolutionary interconnect fabric that is low latency, high bandwidth, and low power.
The result is a single machine with the network capacity of a supercomputer, which enables programming flexibility and unprecedented scaling to models far larger than anything yet conceived. This will allow rapid iteration and accelerate the development of machine intelligence that will advance our society unlike anything that has come before.
Technology: Brain and Nests
Latest from Fathom
Deep Learning with Trillions of Parameters: The Interconnect Challenge
There is huge economic value in trillion-parameter models, but training and deploying them requires a large amount of hardware. Scaling deep learning to more hardware can be done in many ways, and the only approach that can scale to trillions of parameters in production, model parallelism, requires a much faster interconnect technology than is available today. In this post we cover what it takes to train the biggest state-of-the-art models today, walk through some estimates of what it would take to train an even larger, trillion-parameter model, and point to the scaling bottleneck that explains why a new kind of interconnect technology is needed to deploy a brain-scale neural net.
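The hardware pressure described above can be made concrete with a quick back-of-envelope calculation. The sketch below is illustrative only: the precision choices, optimizer-state multiplier, and per-device memory figure are assumptions, not Fathom's numbers.

```python
# Rough sizing for a 1-trillion-parameter model (illustrative assumptions).

PARAMS = 1e12            # 1 trillion parameters
BYTES_PER_PARAM = 2      # fp16 weights (assumption)

weight_bytes = PARAMS * BYTES_PER_PARAM              # 2 TB of weights alone

# Adam-style training roughly adds fp32 master weights plus two fp32
# moment estimates per parameter (a common mixed-precision setup).
optimizer_bytes = PARAMS * 4 * 3
total_bytes = weight_bytes + optimizer_bytes         # ~14 TB before activations

DEVICE_MEMORY = 80e9     # bytes per accelerator, e.g. an 80 GB device (assumption)
min_devices = total_bytes / DEVICE_MEMORY

print(f"weights:        {weight_bytes / 1e12:.1f} TB")
print(f"training state: {total_bytes / 1e12:.1f} TB")
print(f"min devices just to hold state: {min_devices:.0f}")
```

Even before counting activations or gradients, the state alone must be sharded across hundreds of devices, and every training step must move data between them - which is why the interconnect, not raw compute, becomes the bottleneck.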
8 Feb 2021