There are a lot of “cool” things happening in the AI/ML field, but as other, more perceptive people have stated, there is an abundance of downsides, pitfalls and straight-up dangers lurking in the background of it all.
Ludicrous power and water usage (I wonder how many liters of water I wasted in the creation of my little OCR script), the reproduction and ossification of the status quo through existing data biases reflecting current power structures; the list goes on. In short, there are a lot of bad things currently happening, and a lot of bad things that will probably happen, related to this technology.
Let’s ignore the danger of impending climate doom for a second, though, as well as the other dangers caused by the application of these technologies. I want to highlight a lesser but very immediate and present danger instead - the danger of cavalier technological solutionism and misconception.
This danger is not actually driven by the technology itself, but by the hype around possible fantastic futures. This exaggerated image of AI is damaging in its own right, as it leads people in power to make rash and destructive decisions based on fictions and hopes.
For certain people, almost every manual task is brushed off as something that “AI can solve in the future”, with complete disregard for whether such a solution is even remotely feasible.
The result is the same every time: resources are moved away from the functions and areas of expertise that have been deemed ready for AI replacement by the powers that be, while no concrete steps are taken to actually develop these AI replacements. It is assumed that this crucial development will happen by itself, out there in the wider world. The end result is a loss of expertise within the organisation, while the technological replacement never arrives.
Such attitudes are deeply damaging to any type of organisation - perhaps especially to institutions with narrow fields of expertise and small groups of experts, such as specialised cultural heritage institutions, where you can rarely afford to lose anyone. These views need to be challenged and fought whenever they are encountered.
There are many use cases where I think AI/ML could be useful. Applied correctly, it allows people to spend less time on menial and repetitive tasks. The problem I’m trying to highlight, though, is that to accomplish this you need to spend time and resources identifying these menial tasks and coming up with ways of actually replacing them in a proper manner. At the moment, the AI/ML gospel can act as an excuse for either making short-sighted decisions or, worse, doing nothing!
I’ve encountered this again and again for several years now, mostly in relation to cataloguing and metadata creation. It stems from an incredibly narrow-minded understanding of both what metadata is and what AI/ML can do. For example: the value of an AI application hallucinating provenance metadata for historical items is less than zero…