AI is evolving. ChatGPT is now limiting legal and medical advice

AI will no longer give advice like a doctor or a lawyer. What does this mean for users, businesses, and corporate content strategy?

While most companies are just getting used to AI as a regular work tool, a change is coming that will affect not only users but also the way AI content will function in the future. OpenAI confirms that ChatGPT will no longer respond to legal and medical questions in a way that could be interpreted as expert advice.

This is no accident: it is a step towards stricter regulation, and a signal that the era of "AI advising on everything" is slowly coming to an end.

Why is this happening?

There are several reasons, from legal liability to the risk of so-called AI hallucinations: situations in which artificial intelligence fabricates an answer with convincing certainty even when it is completely false.

The biggest pressure, however, comes from regulators and lawmakers. An AI that gives advice on medication or legal action is no longer seen as a tool but as an "unauthorized service provider", which creates legal risk for both the developer and the user.

What will change for regular users?

ChatGPT will no longer produce responses such as:

“Here is the exact dosage of medication you should take”

“According to law 123/2024, you are entitled to...”

Instead, responses will be like:

“This information may relate to medicine/law, please consult an expert.”

ChatGPT will thus become more of a guide rather than an advisor.

What does this mean for businesses, marketing, and content creation?

  1. AI will not be a source of “finished legal or medical texts”

    → those who based blogs or websites on ChatGPT answers will need to change their workflow

  2. Companies will need to combine AI + experts

    → content will need to be verified, supplemented, and cited

  3. SEO will change: Google will weight E-E-A-T (experience, expertise, authoritativeness, trustworthiness) even more heavily

    → AI text without an expert will lose its value

  4. The demand for “human review” content will rise

    → marketing will need to be “more human, not just digital” again

And here comes the point: this change strengthens brands that have relied on long-term know-how, not just quick AI generation.

How are we responding at Hiroäs?

We are already combining AI + human editing + expert sources in content creation.

Not due to panic, but because the future of content creation will stand on “Digital, but human.”

Technology helps — but trust is still built by humans.

Conclusion

AI is not changing because it's weak. It is changing because it is too strong — and the world has to react.

Companies that learn to work with the new AI rules sooner will have an advantage.

Those that bet everything on “ChatGPT will write everything for me” may run into trouble.

© 2025 Hiroäs. All rights reserved.
