This presentation conveys two main messages. First, before using large language models (LLMs) for any purpose, it is crucial to be aware of their hidden biases. Second, prompting matters: the way we phrase questions and tasks, as well as the settings we choose, can significantly influence a model’s responses and the “thinking” patterns it activates. Artificial intelligence is not a blank slate; its behavior, much like human behavior, can be shaped and is often unpredictable. The results presented are highly relevant for professionals working with telecommunications networks, especially where AI is applied in automated decision-making or supervisory systems. The talk highlights the risks of relying on models that reproduce biased patterns within technologically sensitive environments.