The history of generative AI begins with a betrayal.
OpenAI, founded to give developers an open and ever-rising platform for AI development, went closed source and got into bed with Microsoft. Now Microsoft is leading the charge to regulate AI, raising costs for competitors and creating closed-source lock-in. The company wants licensing requirements for all AI: bureaucratic delays before anyone or anything can compete with its proprietary offerings.
The search listing for the Google AI page claims it offers an “ecosystem of open-source tools, datasets, APIs and more.” But open source is never mentioned on the page itself. Google, like Microsoft, now thinks AI is too dangerous to be put into the community’s hands.
If it’s that dangerous, guys, then ban it. Forbid machine learning; forbid progress. If code exists, someone will try to misuse it. That’s the law of code. The only answer is for the code not to exist.
Meta, the artist formerly known as Facebook, claims its code is open. But is it, really? Ever since Oracle bought Sun in 2010 and closed off the movement’s crown jewels, smothering Java, MySQL and Solaris, I have been leery of calling any project managed by a private company open source. The reasons should now be obvious.
Right now, AI code is offered along a gradient of licenses that you’d better read before you put your fingers on a keyboard. Supposedly these open models are catching up with OpenAI and Google because, as everyone has learned over the last quarter century, open source puts more hands on the code and lets everyone build from a rising platform.
I’ll believe the promises of a company like Stability AI, maker of Stable Diffusion, only after its code is donated to something like the Eclipse Foundation or the Apache Software Foundation, which can guarantee the code will remain free. This needs to happen soon, because the dance between government and “big tech,” meant to keep code proprietary and secret, is speeding up.
If they have their way, the future will be corporate, government-controlled, and anything but free.