Polish companies remain reluctant to adopt artificial intelligence (AI) as part of a broader strategic framework. In contrast, interest among lower-level employees is growing rapidly — they are often more open to experimenting with and implementing new AI tools than company executives. Many are doing so independently, taking advantage of free or inexpensive AI applications that make their daily work easier. However, bottom-up AI adoption should be monitored and guided to prevent unintended risks.
Employees Experiment Freely — But Verification Is Key
Many users tend to treat answers generated by large language models as unquestionable truth. A major concern is AI hallucination — the generation of inaccurate or entirely fabricated information. Therefore, any AI-generated data, especially when it concerns a company’s operations, should be verified using a second reliable source. While this adds extra work, the time saved by AI in other areas still makes it a worthwhile trade-off.
Why Companies Need an AI Implementation Strategy
This trend demonstrates why it’s essential to develop a structured AI strategy within an organization — particularly when employees are already using AI tools spontaneously. A strategy should not only define the implementation model and employee training, but also establish clear rules and boundaries for AI use.
Such a framework helps prevent common issues associated with unauthorized AI use, such as:
- Copyright infringements,
- Misinformation and reputational risks,
- Exposure of sensitive or confidential data.
It’s also important to remember that entering company-related data into public AI models may indirectly affect brand image, since similar information could resurface when other users ask related questions.
Another recurring issue is the use of poor or awkward machine translations and AI-generated emails that sound artificial or off-brand. Enthusiasm for automation often blinds users to the fact that excessive reliance on AI can lead to content homogenization, making a company's communications indistinguishable from those of its competitors.
Hence, AI training should go beyond technical proficiency. It must teach employees ethical, critical, and brand-consistent use of AI tools.
AI in Business? Yes — But Set the Rules
Company-wide AI usage policies should clearly define:
- The approved list of AI tools and platforms, preferably under corporate subscriptions:
  - Business versions typically do not use company data for further model training.
  - They also provide administrator-level access to user activity, improving oversight and data governance.
- An absolute ban on entering personal, financial, contractual, or confidential information into any AI tool.
Another crucial rule should be mandatory human supervision and final approval of any AI-generated content or analysis — especially when these outputs influence key business decisions. This helps prevent situations where management justifies errors with excuses such as “we did it because AI recommended it.”
Establishing such internal regulations is not merely a best practice; in the EU, it is also a legal requirement.
Legal Obligations Under the EU AI Act
The European AI Act obliges organizations to ensure their employees have adequate AI-related competencies, including awareness of potential risks.
These risks include data leaks, privacy breaches, and critical decision-making errors caused by overreliance on unverified AI outputs.
Therefore, every organization must introduce internal AI policies and procedures, as it is almost certain that some employees are already using AI tools, whether formally or informally.
Misuse of AI Can Lead to Heavy Fines
AI is now subject to increasingly strict legal regulations — particularly in Europe under the EU AI Act. Organizations must ensure that their use of artificial intelligence complies with the law.
A striking example involves AI-based recruitment or admissions systems that apply discriminatory mechanisms. Under the AI Act, such systems can fall under “prohibited AI practices.” These include:
- Manipulative or subliminal techniques designed to influence emotions or behavior,
- Deceptive content generation,
- Discriminatory scoring or profiling systems.
Violations can result in fines of up to 7% of a company’s global annual turnover or €35 million, whichever is higher.
This underscores that responsible AI adoption is not optional — it is a legal and ethical obligation. Deploying AI without a clear strategy not only risks financial penalties but also threatens a company’s credibility and customer trust.
Author: Marcin Mika, Service Delivery Director at ADP Polska
Source: CEO.com.pl – “Companies Still Lack AI Strategies as Employees Experiment on Their Own”