6 Questions to Ask Before You Build or Buy an AI Solution



In July 2025, a code-writing AI assistant deployed internally at Replit deleted the company’s production database after it “panicked” and issued a deletion command on its own. Although the data was quickly restored, the incident highlights how even well-intentioned tools can introduce unexpected technical debt and force urgent fixes when proper safeguards are missing. 

For founders and product leaders, this is a reminder: evaluating and implementing AI requires careful strategy, not just speed. In this article, I’ll cover the essential questions every leader should ask their vendor, and the key areas to audit, before investing a single dollar in an AI product. 

 

What you’ll walk away with: 

  • How to ensure your AI solution is aligned with long-term product ownership
  • Which questions uncover hidden cost structures 
  • How to build vendor relationships grounded in transparency and accountability
  • What to request to assess technical quality and compliance confidently 

 

  1. Is your AI truly proprietary, or just an API wrapper? 

Using third-party APIs can accelerate time-to-market and streamline integration between systems. But you don’t get full ownership of the result. Relying on external models may ultimately compromise your ability to control costs, performance, and outcomes. 

When using a commercial model, you should know what kind of data you’re putting into it. If it includes sensitive, patented, or confidential information, make sure you understand how the model handles it. Models like those from OpenAI process data externally, and sharing protected data can cause serious security problems. 

Before including any sensitive data in your testing, ask your vendor: 

  • “How does your model differ from public tools like ChatGPT?” 
  • “Where is the model hosted? What’s the provider’s geolocation?” 
  • “Are outputs tailored to our industry context or broadly generic?” 

Here’s a quick test: ask the model a question unrelated to your business domain, like asking a healthcare model about the history of Troy. If the model answers correctly, it has broader context and was potentially trained on a much bigger dataset – a sign your vendor opted for an API wrapper instead of a custom solution.
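The quick test above can be scripted. The sketch below assumes the vendor exposes an OpenAI-compatible chat endpoint; the URL, model name, and keyword lists are all illustrative assumptions, not a real vendor API:

```python
# Wrapper probe: send off-domain prompts to a vendor model and flag answers
# that engage with topics far outside your business domain.
import json
import urllib.request

OFF_DOMAIN_PROBES = [
    "Summarize the history of Troy in two sentences.",
    "Explain the rules of cricket.",
]

# Keywords suggesting the model answered the off-domain question
# instead of declining or deflecting. Illustrative only.
ENGAGEMENT_HINTS = {
    "troy": ["trojan", "bronze age", "homer", "hisarlik"],
    "cricket": ["batsman", "wicket", "innings", "bowler"],
}

def looks_engaged(topic: str, answer: str) -> bool:
    """Return True if the answer appears to engage with the off-domain topic."""
    return any(hint in answer.lower() for hint in ENGAGEMENT_HINTS[topic])

def probe_vendor_model(url: str, api_key: str, prompt: str) -> str:
    """Send one prompt to an (assumed) OpenAI-compatible chat endpoint."""
    payload = {"model": "vendor-model",  # hypothetical model name
               "messages": [{"role": "user", "content": prompt}]}
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example usage (requires a live endpoint):
#   answer = probe_vendor_model("https://vendor.example/v1/chat/completions",
#                               "YOUR_API_KEY", OFF_DOMAIN_PROBES[0])
#   looks_engaged("troy", answer)
```

A domain-specific model should decline or deflect these probes; confident, detailed answers are the tell.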

 

  2. Do I keep ownership of the data even if I move on? 

Knowing who owns the data and the models trained on it is crucial if you want to reuse them to enhance AI models, license the outputs, or switch vendors down the line. 

For example, in the healthcare sector, the data processed is extremely sensitive and heavily regulated. If your vendor doesn’t provide a clear data management policy, you risk falling out of HIPAA compliance. 

Another consideration is using data for training – specifically, whether your vendor will utilize your data to train other custom models, particularly if it is confidential or sensitive. 

Ask questions like: 

  • “Who owns the trained model once it’s built using our data?” 
  • “If we end our contract, can we take all outputs and fine-tuned models with us?” 
  • “Do you train other custom models on our data or input?” 

If your vendor hesitates to explain how they handle data during early conversations, clarify this before signing any agreement. 

 

  3. Can you show me the actual architecture? 

The architecture diagram shows the structure of your AI product, including the data flow, which components are custom or third-party, and overall product functionality. It’s crucial for maintenance and onboarding. 

If a vendor hesitates to share details on architecture or security practices, ask them why. There might be a good reason, such as confidentiality restrictions, but it’s always worth clarifying to ensure expectations are aligned. 

In the case of AI products, documentation also helps you evaluate the costs of scaling them. 

Take OpenAI’s case. The total number of paying subscribers for all ChatGPT plans is around 10 to 15 million users, and yet they’re still in the red due to operational costs and extremely high usage. 

If you’re using an external wrapper, predicting your future expense structure may be impossible, as AI providers can raise their prices at any time. 

 

  4. What’s the cost of scaling this solution as we grow? 

AI pricing models vary widely, especially when they involve usage-based APIs. Without thoughtful design, what starts as a manageable cost can balloon as adoption grows. 

If your vendor lacks cost-saving strategies, you might encounter unplanned expenses as usage increases. By implementing specific measures, you can protect your budget from unforeseen pricing changes and minimize overspending.

Ask your vendor for cost explanations: 

  • “What’s the cost at 10K, 100K, or 1M users?” 
  • “Are there usage caps or inflection points?” 
  • “What cost controls are built into your architecture?” 
  • “What’s the fallback model if the current one stops working?” 

Remember, you don’t build only to go live, but to grow and scale. 
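A back-of-the-envelope projection makes these questions concrete. Every number below is an illustrative assumption; plug in your vendor’s real per-token prices and your own measured usage:

```python
# Rough monthly API cost projection at different user counts.
# All constants are assumptions, not real vendor prices.
PRICE_PER_1K_INPUT_TOKENS = 0.0025   # USD, assumed
PRICE_PER_1K_OUTPUT_TOKENS = 0.01    # USD, assumed
REQUESTS_PER_USER_PER_MONTH = 30     # assumed usage pattern
INPUT_TOKENS_PER_REQUEST = 800       # assumed prompt size
OUTPUT_TOKENS_PER_REQUEST = 400      # assumed response size

def monthly_cost(users: int) -> float:
    """Estimated monthly API spend in USD for a given number of active users."""
    requests = users * REQUESTS_PER_USER_PER_MONTH
    input_cost = requests * INPUT_TOKENS_PER_REQUEST / 1000 * PRICE_PER_1K_INPUT_TOKENS
    output_cost = requests * OUTPUT_TOKENS_PER_REQUEST / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    return input_cost + output_cost

for users in (10_000, 100_000, 1_000_000):
    print(f"{users:>9,} users -> ${monthly_cost(users):,.0f}/month")
```

Under these assumptions the spend scales linearly from roughly $1,800/month at 10K users to $180,000/month at 1M users; the point is to run this math with your vendor’s real numbers before you sign.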

 

  5. How do you ensure compliance with regulations like GDPR or HIPAA? 

Over 74% of companies experienced AI data breaches in 2024, representing a nearly 10% increase from the previous year. Yet, close to half decided not to report it due to concerns over reputational damage. 

In 2024, ChatGPT-based chatbots leaked sensitive data from conversations, including personal data and login credentials, showing that even well-known AI tools are not entirely immune to vulnerabilities, and their use requires caution. 

Safeguarding your business from security incidents is crucial – both to maintain customer trust and to avoid regulatory exposure. Even small lapses can introduce reputational and financial risks, especially in sensitive sectors such as healthcare or banking. 

To mitigate these risks upfront, request a clear and practical security roadmap from your vendor.

Ask the following questions: 

  • “How do you protect data during training and deployment?” 
  • “What controls are in place to detect and prevent misuse or breaches?” 
  • “How do you ensure GDPR/HIPAA compliance in both training and inference?”
  • “Can we see audit logs, data access policies, and privacy workflows?” 
  • “Can users opt out, request deletion, or get full visibility into their data usage?” 

If security events have occurred in the past year, ask how they addressed them. Focus on the learnings: have they updated policies, conducted audits, or revised documentation? 

Understanding their response to incidents can help assess their long-term reliability. 
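One concrete control worth asking about (and verifying in a demo) is whether obvious PII is redacted before a prompt ever leaves your infrastructure. A minimal sketch, assuming simple regex-based redaction; real deployments should use a vetted PII-detection library or DLP service, not hand-rolled patterns:

```python
import re

# Illustrative patterns only -- not exhaustive PII coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable PII with typed placeholders before inference."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or 555-867-5309, SSN 123-45-6789."))
```

A vendor with a real privacy workflow should be able to show you where an equivalent step sits in their pipeline, and what it catches.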

 

  6. What references or case studies can you show from similar companies? 

Builder.ai, once valued at $1.3 to 1.5 billion and backed by Microsoft, claimed to be an AI-first development platform. In 2025, the company filed for bankruptcy after reports emerged that most of its “AI” was actually handled by over 700 human engineers. 

That’s why you need success stories with concrete numbers, like increased conversions, reduced churn, or faster onboarding.

What to ask your vendor: 

  • “Do you have case studies involving companies similar in size or industry?”
  • “What business metrics improved as a result of your solution?” 
  • “Have you built a similar solution in the past?” 

Vague stories hidden behind NDAs or coated in buzzwords are just fluff. Instead, look for strong, real-world examples that demonstrate tangible business value. 

 

Summary 

Integrating AI can enhance automation, improve customer experiences, and create new revenue streams. But it’s also a space that requires careful planning. 

In the rush to adopt AI, it’s easy to overlook critical technical risks that could make or break your investment. That’s why a proper technical audit isn’t optional; it’s the only way to invest with confidence. 

 

Before you build, buy, or commit to any AI solution: 

  • Use the six questions in this article as a checklist during vendor calls.
  • Ask for architectural documentation, demos, and data handling policies.
  • Prioritize transparency, portability, and long-term control.
  • Conduct technical risk diagnosis to safeguard your investment.

By doing so, you’ll position yourself to gain the upside of AI, while sidestepping many of its growing pains.

 

By Lukasz Lazewski, CEO, LLinformatics