Nov 5, 2023 · Liked by Ed Sim

I think we are going to see a shift toward running more AI workloads locally on laptops in the workplace. Apple has been way out in front here with the M2 processors, which are well equipped to run machine learning models. There are big advantages to running AI locally. First, it's more secure. Admins can push models out to users by LDAP group using endpoint management software, so different employees get different AI models based on need and authorization. Second, it's cheaper. Since inference is most of the cost of running AI, why not take advantage of hardware you've already purchased, rather than paying as you go for GPUs in the cloud? Third, the experience is better, because you don't have the latency of a round trip to the cloud on every request.

I work in AI, and for development we train and run models from repositories like Hugging Face locally on our MacBooks all the time (a rough sketch of what that looks like is below). For training our largest models, though, the cloud still makes sense: they simply require too much data to store locally, and you need some kind of big data platform, even if it's just S3, to store and process those quantities. It will be really interesting to see how IDEs evolve given these constraints - you want to do as much locally as possible, but all the big data processing tasks still belong in the cloud.
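To make the local-inference workflow concrete, here's a minimal sketch, assuming PyTorch with the MPS backend on Apple Silicon and the Hugging Face transformers library; the model name and prompt are just placeholders, not anything from a specific project:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Prefer the on-device GPU (Metal Performance Shaders) on M-series Macs;
# fall back to CPU elsewhere.
device = "mps" if torch.backends.mps.is_available() else "cpu"

# Placeholder model: any small causal LM from the Hub works the same way.
model_name = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

# Tokenize a prompt, run generation entirely on the laptop, and decode.
inputs = tokenizer("Running inference locally means", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

No request ever leaves the machine, which is the security and latency point above: once the weights are downloaded (or pushed out by IT), inference is free and offline.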
