AI has been the most [ab]used term in recent times.
After the initial hype, the last few months have brought more of a reality check, with revised, more realistic expectations.
One has to admit there are many case studies of companies successfully adopting AI-based approaches and tools to gain significant improvements in productivity, reductions in operating costs, and so on.
But similar gains have not been reported from the vibe coding wave!
// If you are not familiar with vibe coding, it is the practice of building software by describing requirements to an AI, rather than writing code by hand. //
While it seems very attractive, unmanaged vibe coding in an enterprise has a couple of significant downsides that must be considered.
Before getting to that, I want to share a flashback from the early days of my career.
Many years ago, in the mid-80s, I was working at the software engineering plant of a major computer manufacturer, on its OLTP middleware.
This was one of their flagship products with a record of high performance.
It was a natural choice – also because there were no other effective alternatives – for many real-time, high-performance solutions, including lotteries, banking, and so on.
But, over the years, as with all software, the core engine became a little bloated and tended to slow down.
The kludges applied not only added to the bloat but also introduced some unpredictable behavior – leading, in some edge cases, to corruption of the data that was maintained in memory to enable high-speed processing.
An inside joke at the plant at that time was that nobody could beat the speed of the OLTP platform: whatever it does, it does so fast that even memory corruption happens before we can detect it, and by then a lot of damage has already been done.
So it was decided to rewrite the platform – or rather, to redesign it from the ground up, from first principles of good software architecture.
The focus was on creating a stable, yet flexible system based on the principle of loose coupling between modules and high cohesion within modules.
This was long before we heard of microservices and API driven development.
Flash forward to 2025, and I see shades of deja vu: suddenly vibe coding was supposedly the next great tsunami, pushing most of the developer community off their strong base.
Many companies let go of staff, blaming it on – or crediting it to – the impact of AI.
In parallel, the adoption of no-code tools and platforms made the power of developing meaningful solutions accessible to all.
A related deja vu: when desktops became popular in corporate environments, business users had tools ranging from spreadsheets that could be programmed with macros to simple ‘application’ generators such as FoxPro or Visual Basic.
This led to many personal applications being built, then shared with colleagues, and sometimes becoming common corporate applications.
As long as the people who created them were available and interested in supporting them, there were no major issues.
I remember one of my consulting engagements, where a colleague and I had to decode a spreadsheet-based model that would ‘fit’ people into a pyramid to determine increments and promotions.
Due to some changes in HR policies related to rewards and recognition, the old model needed to be updated!
There was no documentation – not even the password that protected the spreadsheet.
Our only option was to simulate the logic, making some assumptions to reverse engineer it, and then apply the new policies and forward engineer – again in a spreadsheet!
The above two experiences taught me that:
a. when developing IT solutions, it pays to spend some time upfront visualizing the big picture and establishing at least some basic architectural guidelines
b. the shadow IT approach makes individuals productive while being a concern for corporate IT
All this was before AI based tools entered the scene.
Now, it is easy to prompt a code generation tool to create a solution like the one above, without having to reverse engineer anything.
That would be a great productivity boost. It is probably shadow IT on steroids!
But, such an approach, if not managed under some corporate policies and governance guardrails, could easily create significant technical debt for the organization or some runaway liability in terms of operational costs.
Let me elaborate:
The tech debt of such solutions needs to be viewed in the company context, not just that of the specific application or program.
As I mentioned earlier, the lack of documentation results in higher costs and adds uncertainty – not only about the functionality, but also about the tech environment used to create or deploy those solutions.
How easy is it to find a good FoxPro developer today?
For code generated by a bot or an autonomous agent, would you need the same or compatible agent in the future?
Once the data is localized to these applications, the corporate taxonomy also goes for a toss, increasing the cost of integrating across many such applications and connecting to the corporate context.
So much for the corporate tech debt, for now.
The runaway liability can be in many forms.
Taking financial liability first: there are many horror stories of companies receiving HUGE bills for their AI experiments, because runtime token consumption was something they had not anticipated.
Many AI [read: LLM] engines charge based on the number of tokens consumed.
Unless some analysis and optimization is done, large volumes – or repeated processing of even smaller volumes – can easily ring up big bills.
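A back-of-the-envelope calculation makes the point. The sketch below assumes illustrative per-token prices and volumes (not any real vendor's rates), but shows how quickly a "small" internal tool can add up:

```python
# Back-of-the-envelope token cost estimator.
# All prices and volumes are assumed placeholders, not real vendor rates.

PRICE_PER_1K_INPUT = 0.01   # assumed $ per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.03  # assumed $ per 1,000 output tokens

def monthly_cost(requests_per_day, input_tokens, output_tokens, days=30):
    """Estimate the monthly bill for a given request volume."""
    cost_per_request = (
        input_tokens / 1000 * PRICE_PER_1K_INPUT
        + output_tokens / 1000 * PRICE_PER_1K_OUTPUT
    )
    return requests_per_day * cost_per_request * days

# A modest internal tool: 2,000 requests/day,
# ~1,500 input and 500 output tokens per request.
print(f"${monthly_cost(2000, 1500, 500):,.2f} per month")  # → $1,800.00 per month
```

At these assumed rates, even a modest tool runs into four figures a month – and a retry loop or an agent that re-reads its whole context on every step multiplies that quietly.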
Other liabilities could include :
- Reputational loss due to slop, i.e. unmanaged generation without attention to quality. Just like our OLTP engine, AI can produce results at a speed that outruns our ability to govern it.
- Wrong business decisions due to hallucination – if the human in the loop accepts every suggestion or output from the model.
The silver lining is that these issues are manageable.
With some guidance for vibe coders, and organizational guardrails to ensure compliance with policies, business users can leverage the power of AI in their roles.
Here are three tips to consider:
- Observability: Track usage so that the AI models you choose stay within the budgets you have.
- Human-in-the-loop: Mandate that “vibe-coded” logic is peer-reviewed by a traditional engineer before hitting production.
- Documentation Prompting: Use AI to document the very code it just “vibe-created”.
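To make the observability tip concrete, here is a minimal sketch of a token-budget guardrail that an enterprise could wrap around AI calls. The class name, price, and budget figures are my own illustrative assumptions, not a reference to any real library:

```python
# Minimal sketch of a token-budget guardrail for AI calls.
# The price and budget figures are assumptions for illustration only.

class TokenBudgetGuard:
    """Tracks cumulative token spend and refuses further calls once a budget is hit."""

    def __init__(self, budget_usd, price_per_1k_tokens):
        self.budget_usd = budget_usd
        self.price_per_1k = price_per_1k_tokens
        self.spent_usd = 0.0

    def record(self, tokens_used):
        """Record a completed call; raise once the budget is exhausted."""
        self.spent_usd += tokens_used / 1000 * self.price_per_1k
        if self.spent_usd > self.budget_usd:
            raise RuntimeError(
                f"AI budget exceeded: ${self.spent_usd:.2f} of ${self.budget_usd:.2f}"
            )

# Example: a $100 monthly budget at an assumed $0.02 per 1,000 tokens.
guard = TokenBudgetGuard(budget_usd=100.0, price_per_1k_tokens=0.02)
guard.record(50_000)  # a 50k-token call: $1.00 spent so far, well within budget
print(f"spent so far: ${guard.spent_usd:.2f}")
```

The point is not this particular code, but that the meter sits in the organization's hands rather than arriving as a surprise on the vendor's invoice.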
Is vibe coding a democratization of innovation, or are we just building a faster lane to technical bankruptcy? I’d love to hear how your team is drawing the line.