
Tech giants like to tout trillion-parameter AI models that require massive and expensive GPU clusters.
But Fastino is taking a different approach. The Palo Alto-based startup says it has invented a new kind of AI model architecture that's deliberately small and task-specific.
The models are so small they're trained with low-end gaming GPUs worth less than $100,000 in total, Fastino says. The approach is attracting attention.
Fastino has secured $17.5 million in seed funding led by Khosla Ventures, famously OpenAI's first venture investor, Fastino exclusively tells A Technology NewsRoom. This brings the startup's total funding to almost $25 million.
It raised $7 million last November in a pre-seed round led by Microsoft's VC arm M12 and Insight Partners. "Our models are faster, more accurate, and cost a fraction to train while outperforming flagship models on specific tasks," says Ash Lewis, Fastino's CEO and co-founder.
Fastino has built a suite of small models that it sells to enterprise customers.
Each model focuses on a specific task an enterprise might need, like redacting sensitive data or summarizing corporate documents. Fastino isn't disclosing early metrics or users yet, but says its performance is wowing early users.
It's still a bit early to tell if Fastino's approach will catch on.
The enterprise AI space is crowded, with companies like Cohere and Databricks also touting AI that excels at certain tasks.
And the enterprise-focused LLM makers, including Anthropic and Mistral, also offer small models.
It's also no secret that the future of generative AI for enterprises likely lies in smaller, more focused language models. Time may tell, but an early vote of confidence from Khosla certainly doesn't hurt.
In the meantime, Fastino says it's focused on building a cutting-edge AI team.
It's targeting researchers at top AI labs who aren't obsessed with building the biggest model or beating the benchmarks. "Our hiring strategy is very much focused on researchers that maybe have a contrarian thought process to how language models are being built today," Lewis says.