With all of the discussions taking place about the advent and rapid rise of AI-generated words, music and art, I'm reminded that, over the past 20-25 years, there have been similar discussions about the tools created during that time to let artists of varying abilities both express themselves in ways they might not otherwise have been able to (or, perhaps, even thought of) and explore their ideas more productively. For example, several artists I know who were originally trained to draw with pen and ink (and who, at first, hesitated to use these new tools for fear of having their artistry questioned) have told me that computer-based hardware (pens, tablets, 3-D printers, etc.) and software (Photoshop, Illustrator, Maya, Blender and many others) have added many degrees of capability and efficiency to their day-to-day work. Draw an outline, stretch it, color it in, review, erase, substitute another color, etc., all without putting pen to paper! Even those who consider themselves "purists" have, over time and given access to some of these newfangled tools, admitted that even when they're committed to producing finished products using traditional methods, they find themselves doing some or all of their "ideation" digitally before actually doing the work.
Published December 22, 2022 by Mike Goldstein (or was it?)
While everyone who regularly visits the ACHOF site knows of my passion for researching and writing about album cover artists and their output, what many of you might not know is that I'm an admitted techno-geek. That disclosure might lead you to think that I'm inclined to spend most of my time these days lost in the nether regions of technology, but I can assure you that most of those habits are now long behind me. While I can recall with great accuracy my time spent first as a passionate-yet-greatly-underfunded stereophile (someday, we'll swap stories about building several Dynaco amplifier kits before trading up to a GAS Ampzilla), and later my careers in the cartoon animation, CD-ROM authoring and TV/Web content production businesses, I can say with much pleasure that, these days, checking my email, doing research on the Web, editing images in Photoshop and writing articles in MS Word is the extent of my regular relationship with technology (OK, I do use Google Maps when I'm in the car – don't we all at this point?).
With all that said, I still try to pay attention to the intersection of technology and entertainment, and I have followed with great interest the development of streaming music/TV, blockchain technology and NFTs (as a basis for tracking ownership and/or licensing of digital assets) and, most recently, the capabilities of artificial (general) intelligence systems as they relate to the creation of content – research tools, text-prompted images and, most relevant to my efforts these days, writing assistants. Some early adopters have impressed me with rather imaginative uses of AI/AGI tools to create newly-rendered versions of their favorite artwork (some of which I reported on earlier, such as one enterprising individual's use of the DALL-E 2 AI image-making tool to reimagine some of our favorite album covers). At the same time, much has been written – particularly in the fields of education and the media – about the upsides and downsides of using these tools, with some suggesting that a great percentage of what we'll read, hear and see in the future will have been created by "authors" who simply input a list of requests for articles (or images, videos, podcasts) based on a few select criteria and then share the output with the public. Will what we see and hear be "real" or "factual", or are we simply too busy/lazy to care?