The Environment Is Rewriting Us
- Eddie Eccker, MS, LMFT

A Reflection on Algorithmic Mirrors, Belief, and the Self
When a small child draws a picture and brings it to a parent, the child does not yet experience the drawing as something separate from herself. If the parent praises it, the child does not hear "your drawing is good." She hears "I am good." If the parent criticizes it, the same fusion runs the other direction. The drawing and the self have not yet been differentiated. They are the same thing.
The capacity to hold those apart, to know that the drawing has qualities and that those qualities are not a verdict on the whole self, is something a person develops over years. It is one of the quiet achievements of growing up. A child who learns this can hear a teacher say her math is wrong without concluding that she is wrong. An adolescent can have a friend disagree with her opinion without concluding that the friendship is over. An adult can have a spouse push back on a belief without experiencing the pushback as an attack on who he is.
That capacity is what I want to talk about, because I think it is breaking down in ways that have a lot to do with the environments we now live inside.
What I Have Been Noticing
I have been thinking for a while about a shift in how people seem to hold their convictions. Beliefs are forming faster than they used to, and they are getting locked in before there has been any real time to think them through. That is not always a problem. Some of those beliefs are reasonable, and some of them are even true. What catches my attention is that they keep arriving in similar shapes from people with very different backgrounds, as if everyone were reading from a slightly different version of the same script. The thought shows up in the head fully assembled. There was no working out. It was downloaded, not built.
I should be honest that I am not the exception to this. I use the same tools everyone else uses. I notice my own thinking shaped in ways I did not consciously choose, and I notice it in my family, in my friends, in the people who walk into my office. We are all in the same pool.
What used to give a sense of community and identity was mostly local. Friends, neighbors, relatives, the people you actually saw. Now it is global. The other day, I caught my son using an Australian idiom in a sentence, and I had to ask him where he picked it up. He said he heard it from friends online. The slang of his childhood is from everywhere. The slang of mine came from the small town I grew up in and whatever made it through a fuzzy TV. The only Australian exposure I had was Crocodile Dundee and a traveling basketball team that came through my high school once.
That is not necessarily a bad thing. There is real value in exposure. But it does change something. We used to grow up somewhere. Now we grow up from everywhere.
How the Environment Reinforces What We Already Want to Believe
The mechanism people often reach for first is repetition. Hear something enough times, and it starts to feel true. That part has been studied carefully, going back to Hasher and colleagues in 1977, and it has held up across decades of replication. Repeated statements feel more true than new ones, even when they are false, even when they are implausible, even when they contradict what the person already knows. Warning labels and accuracy reminders only partially reduce the effect. What feels familiar, the brain often registers as true.
But repetition by itself does not quite explain what I am seeing. If it were just repetition, we would all be drifting toward the same average. We are not. We are drifting toward narrower and more confident versions of what we already believed before we logged on. The algorithms are not feeding us random content. They are feeding us a version of ourselves. They notice what we respond to and serve more of it, and over time, what we see narrows. Not because anyone is forcing it on us, but because we are being shown more of what we already lean toward, and what is familiar starts to feel true.
The narrowing is faster than people assume. It does not take years. Three days of consistent engagement with a particular kind of content is often enough to start seeing the world through that frame, and the shift is gradual enough that the person inside it does not register the change.
I want to be careful here because the strongest version of this claim has actually not held up well in the research. The original filter bubble thesis, that algorithms mechanically seal users inside ideologically uniform feeds, is one that a 2022 Reuters Institute literature review found no support for. Broad echo chambers turn out to be much less widespread than the popular discourse assumes. What does appear is narrower. Self-selection drives some real concentration among the more politically engaged. Personalization makes it easier to settle into narrow informational habits without noticing. The more honest version of the concern is that algorithmic environments do not seal everyone in, but they make a particular kind of self-mirroring easier to fall into for people already inclined toward it, and they do this most powerfully for users whose identities are still forming.
That is a smaller claim than the popular version. I think it is still serious enough to warrant attention.
When a Belief Becomes the Self
Here is where I think the developmental piece matters.
Differentiating beliefs from the self is something that has to be learned. It is not automatic. The child who has not yet learned it experiences praise of the drawing as praise of the whole self. The adult who has not learned it, or who has unlearned it, experiences disagreement with a belief as an attack on the whole self. The structure is the same. What changes between the child and the adult is supposed to be the achievement of differentiation, the slow work of learning that you have qualities, opinions, beliefs, and convictions, and that none of those alone is the totality of you.
What I am watching, both in clinical work and in the broader culture, is that the achievement seems to be getting harder to reach and easier to lose. Adolescents, who are already in the middle of the work of differentiating belief from self, are doing it inside environments that mirror them back to themselves. Adults who should already have done the work are sliding back toward the undifferentiated state, often without noticing.
The marker is not the strength of the belief. Strong beliefs have always existed, and people have always argued for them passionately. The marker is what happens when someone is challenged. When belief and self are differentiated, disagreement can be metabolized as information. The person can consider the challenge, accept what is true, reject what is not, and remain intact. When they have not been differentiated, or when the differentiation has come undone, disagreement is metabolized as threat. The challenge no longer arrives as new data. It arrives as a verdict on the self.
That shows up as defensiveness, withdrawal, escalating reactivity, black-and-white sorting of people into safe and unsafe categories, or an inability to sit with ambiguity long enough to think. These reactions are not chosen. The person genuinely experiences the conversation as dangerous because the underlying structure does not separate what they believe from who they are.
By the time this reaches a marriage or a family, it does not feel like influence to the people inside it. It feels like clarity. It feels like certainty. It feels like finally seeing things for what they are. That is what makes it so hard to interrupt. The people most caught in this dynamic are the least able to perceive that they are caught in it, because from the inside, it does not feel like narrowing. It feels like waking up.
Adolescents and Families
What looks like a simple disagreement in a couple about parenting, money, or values often involves years of reinforcement from different platforms and communities. Two people are arguing about a decision in front of them while operating from informational worlds that have been shaped in different directions for a long time. The disagreement is not really about the decision. It is about which set of background assumptions counts as obvious. That makes it hard to resolve, and it makes ordinary marital conflict harder to metabolize than it used to be.
Adolescents are especially vulnerable, because they are doing identity-formation work in real time inside the environment I have been describing. The most prominent argument for this in the public conversation is Jonathan Haidt's, in The Anxious Generation. Haidt argues that the convergence of smartphones and social media in the early 2010s drove a sharp rise in adolescent depression, anxiety, self-harm, and suicide. The case has been seriously contested. Candice Odgers, in Nature, argued that the evidence for the strong causal claim is weak and that researchers looking for the kind of large effects Haidt describes have mostly found small or mixed associations. The debate is live, and a non-specialist is not in a position to settle it. The narrower claim is harder to dispute. Adolescents are now forming identity inside environments engineered for engagement rather than for maturation, and the developmental task they are doing is hard enough without those environments mirroring them back to themselves while they do it.
Generative AI as a Mirror
Generative AI adds another layer to all of this because it responds.
I grew up wanting a Dick Tracy watch, the wrist communicator from the old detective comic. When the Apple Watch arrived, and I could actually take a call from my wrist, it felt like the future had finally shown up. But what we are dealing with now is a different category of technology. The Apple Watch is a tool that lets you do something you could already do, faster. Generative AI is a tool that participates in how you think. Those are not the same thing, and the second one warrants more caution than the first.
I want to be straightforward that I use AI all the time. I use it to think, to write, to organize ideas, to slow myself down before I react. Probably more than I should sometimes. It is a useful tool, and a lot of my clients have used it well, putting language to feelings they could not articulate, or working through a thought before bringing it into a hard conversation.
The risk is not the tool. The risk is what the tool does when it becomes a primary source of emotional support. AI conversation is optimized for responsiveness and coherence, which, from the user's side, feels close to validation. It mirrors with unusual fidelity. It rarely pushes back on the framing it is given. It does not get tired, misunderstand, or impose limits that were not asked for. Those frictions are exactly what human relationships supply, and exactly what builds frustration tolerance, empathy, and the ability to hold a self while another person disagrees with you.
I have seen this go badly in my office. A client puts information about a spouse into an AI tool, and the tool gives back a version of the situation that mirrors what the client already believed, because that is the only material it had to work with. The client comes in feeling vindicated by what the system told them. There is no way for the system to know what it is missing, and the client experiences the reflection as confirmation. That can hurt a marriage in ways that are hard to undo.
The American Psychological Association issued a formal health advisory in November 2025 addressing this directly. The advisory is not that AI is bad. It is that generative AI chatbots and consumer wellness apps should not be used as a replacement for a qualified mental health care provider, that they may be appropriate as a supportive adjunct to an ongoing therapeutic relationship, and that the current evidence base is not sufficient to treat them as safe substitutes for clinical care. It is worth noting that the APA is a professional organization with a stake in the role of trained providers, so it is not a disinterested party here. That does not make the concern wrong, but it is worth naming.
A hammer is not a problem. How a hammer gets used on a job site can be. The same is true here. AI used to think more clearly is one thing. AI used to feel more certain about what you already believed is something else, and the second use is harder to notice from the inside than the first.
What Actually Helps
The deeper risk is not misinformation alone, but excessive reflection without enough resistance. An environment that constantly mirrors a person back to themselves can encourage certainty, narrow curiosity, and make contradiction feel intolerable. The narrowing happens slowly enough to feel like clarity. The person rarely perceives themselves as being shaped. They experience themselves as finally seeing things the way they actually are.
This part of the reflection is worth holding even if every empirical claim around it turns out to be weaker than it currently appears. Whether or not the strong filter bubble thesis is right, whether or not the adolescent mental health debate resolves in Haidt's favor or Odgers's, the question about reflective environments and psychological maturation remains. Human beings have always developed psychological strength through wrestling with reality, negotiating difference, and remaining relationally open while uncomfortable. None of that is automatic. None of it is what a self-mirroring environment is optimized to produce.
I do not have a quick fix. Anyone selling one is probably selling snake oil. What I can say is that part of the work is rebuilding internal anchors, the values you actually choose and stick to. Part of it is creating space away from constant input, not eliminating it, but being intentional about what is allowed to shape you. And part of it is getting back into real-world interactions, where feedback is not curated, and the person across from you is just as confused about life as you are.
This is what I tell clients who ask whether they should use dating apps. I usually suggest doing interesting things instead. Other interesting people tend to be where interesting things are happening. More importantly, that is where things stop being filtered and optimized, where the rub against another person is real, where you cannot edit yourself before you are seen. That rub is what builds the capacity I have been describing. It is what makes it possible to hold a belief without becoming the belief, to be challenged without crumbling, to disagree with someone you love without ending the relationship.
I tell people sometimes that I do not really care what their beliefs are. What I care about is whether they are pursuing truth. Because in pursuing truth, especially in relationships, we end up pursuing something more important than agreement. We end up pursuing genuine knowing. The kind of knowing that produces real intimacy in marriage, real vulnerability in friendship, real connection in community. That kind of knowing requires being willing to be seen for who you actually are, not the curated version, and to let that be tested.
I have taken that risk in my own life. I have been fired by friends and by clients over the years for being honest in ways they did not want me to be. That is part of the deal. I am not going to be everybody's brand. The alternative, which is to protect an identity built out of reinforced beliefs and curated reflections, may feel safer, but it is fragile. One real challenge, and the whole structure shakes.
You are not done. You are not finished forming. Do not stake your entire identity on what you think you know right now. That is not even close to who you are, and it is not close to worth it either.
The task is not technological withdrawal. It is learning how to remain fully human inside systems increasingly designed to reflect us back to ourselves. The way through that, as far as I can tell, is the same way it has always been. Real relationships. Real friction. Real willingness to be known. Pursue truth, and let the truth keep changing what you understand about yourself along the way.
Sources
American Psychological Association. (2025, November 13). Health Advisory on the Use of Generative AI Chatbots and Wellness Applications for Mental Health. https://www.apa.org/topics/artificial-intelligence-machine-learning/health-advisory-chatbots-wellness-apps
Haidt, J. (2024). The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness. Penguin Press.
Hasher, L., Goldstein, D., & Toppino, T. (1977). Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior, 16(1), 107–112.
Hassan, A., & Barber, S. J. (2021). The effects of repetition frequency on the illusory truth effect. Cognitive Research: Principles and Implications, 6(38).
Odgers, C. L. (2024). The great rewiring: Is social media really behind an epidemic of teenage mental illness? Nature, 628, 29–30.
Ross Arguedas, A., Robertson, C. T., Fletcher, R., & Nielsen, R. K. (2022). Echo Chambers, Filter Bubbles, and Polarisation: A Literature Review. Reuters Institute for the Study of Journalism, University of Oxford.