I decided to test AI by asking Copilot in the Microsoft Edge browser to comment on a post I wrote yesterday in the Preston History Facebook group, featuring an AI-generated TikTok video about life in Victorian Preston:
The story of the Preston riot of 1842, as it happened. Taken from contemporary press reports and evidence from the inquest. #preston #history #chartist #lancashire
The detailed response came in milliseconds. I find it deeply impressive, but also rather worrying. I'd be interested in learning what others think. Here it is:
What the post is really asking
The author is opening a conversation about AI-generated videos that animate Victorian Preston. The tone is deliberately balanced:
Yes, they look impressive – "quite stunning".
But are they trustworthy?
And are they being shared ethically?
So the question isn't simply "Do you like them?" but rather "How should a history-focused community handle AI reconstructions?"
1. The appeal: theyโre visually striking
AI animations of old photographs can be captivating. They make the past feel alive in a way static images don't. That's why they spread quickly and attract engagement.
The post acknowledges this upfront – they are stunning.
2. The core concern: historical accuracy
The author immediately asks: "but how accurate are they?"
This is the heart of the issue.
AI reconstructions often:
Invent details that never existed
Add movement, faces, or textures that are algorithmic guesses
Create a sense of authenticity that can mislead viewers
Blend eras or insert anachronisms
You can already see this worry reflected in the comments on the page – for example, one member points out that a video set in 1842 shows Preston Dock and St Walburge's, neither of which existed at that time.
For a group dedicated to real local history, that's a serious problem.
3. The ethical and copyright issues
Nicola Martin's statement (quoted in the post) lays out the strongest objections:
Copyright
Many AI videos use images from the Preston Digital Archive or the Historical Society.
These images do have named copyright holders.
AI creators often fail to credit them.
That's a breach of copyright and of group rules.
She even explains the correct method: share the image via its web address, not by downloading and re-uploading it.
Ethics
Nicola is also concerned about:
The group being "riddled with AI"
Members mistaking AI fabrications for genuine historical evidence
The erosion of trust in historical material
The blurring of lines between archival fact and AI invention
Her stance is firm: AI posts may be allowed occasionally, but not as a norm.
4. What the post is inviting you to reflect on
The final question – "What do you think?" – is really asking:
Should AI be part of local-history communities at all?
Does it help people engage with the past, or distort it?
How do we balance creativity with historical integrity?
Should AI content always be clearly labelled?
How do we protect future researchers from confusing AI reconstructions with authentic sources?
Given your own interest in Preston's heritage and architectural accuracy, these are exactly the kinds of questions you tend to explore deeply.
If you'd like, I can help you craft a thoughtful comment to post in the group – something that acknowledges the appeal of AI while defending historical accuracy and ethical practice.
7 thoughts on “Preston history and the AI revolution”
I don’t think we should be promoting these tools in our local community. There are so many reasons I could give. But first and foremost, because it shifts the work of cultural production away from the artists and historians who make up Preston. If we’re not the ones doing the work to analyse and re-present our history and culture, then it ceases to be our heritage.
Thanks for your comment. I’ll post next with some more thoughts on AI for historians.
A.I. and the LLMs that platform it are tools. No tool in the hands of a novice or beginner will reveal the nuance and elegance of craft and vision that a long-experienced and deeply-informed user’s hands and mind can fashion. A specialist with enough grounding and authority in a particular subject is far more likely to be able to use A.I. constructively and ethically. The user’s own knowledge and discernment are critical to evaluating the feedback and prompts that A.I. like ChatGPT provide, and to assessing the trustworthiness of the sources it’s using or referring to. I’ve not yet found any responses from ChatGPT in areas in which I have long practical knowledge that were without error. I’ve been using it as an editor, a copy editor specifically, and as a fact checker, and the results are best when that editing and fact-checking are reciprocal. If one uses it enough, one learns its style and mannerisms and a certain predictability surfaces. It needs to be questioned, challenged, and sometimes re-directed as it’s conformist by nature. Read a half dozen or so of almost any collections of essays found on Substack today and you’ll see A.I. phrasing patterns revealed with a disappointing predictability.
Thanks for your comment, Michael. I'll post next with some more thoughts on AI for historians.
A very important range of issues being explored here. Most of us, if we see a positive attempt at making local history accessible, would be supportive and point out historical errors so the wider public and the author/creator could produce a more accurate account.
However, when AI is the source and when AI makes no effort to contact or credit those who either own copyright or who created material and cited sources, I am not sure that correcting the output is wise. The end result of such action is that AI becomes more trusted, and its failure to act responsibly is rewarded.
Like others, I saw some significant errors, including the ahistorical imagery. I won't point them out. Better simply to post: "This AI-generated article contains errors and should not be relied on."
I can see what you’re suggesting here, but either I am misunderstanding A.I., or you are. You speak of A.I. as if it’s an autonomous actor, dreaming up attractive animations or timely screenplays or heartbreaking illness essays on its own. My understanding is that human actors are the starting points. Individuals sitting at keyboards and giving A.I. prompts, like “Give me a four-minute video explaining the causes and the facts of the Preston textile worker riots of 1842.” If the humans setting the project in place don’t already know the history, know those facts and the outcomes, they won’t recognize when A.I. is wrong, and so what they put out there will reflect their own ignorance and laziness, and A.I.’s inherent limitations. Perhaps I’m wrong, but A.I. didn’t create itself. Humans created it. Humans are responsible (or irresponsible, as the case may be) for confirming the facts and correcting the errors.
Thanks for your comment, Steve. I'll post next with some more thoughts on AI for historians.