
CC Licenses and Cacti: It Gets Interesting (or weird) up Close

By: cogdog

I love Creative Commons, ok? I’ve followed and used the licenses since the start, I have the t-shirts. And like many, I can rattle off the stack of letters and explain them.

But in my internet roaming, especially for my other strong interest, photography, I come across things in practice that are, well, much less cut and dried. Like a few posts ago when I fell into some slimy and weird toy spaces of public domain (not strictly CC, but in the same neighbourhood).

And I also love cacti. They are unworldly, especially to an east coast born suburban kid (well, I did taste the Southwest through Roadrunner cartoons), but so intriguing in design and beauty, defying probability by living with almost no water.

Both seem simple from afar, yet different up close, and occasionally you might get stuck by a sharp spine.

Have I exhausted the unnecessary metaphor? I can’t resist; this was a photo of mine I found by searching my flickr stream on “details”.

On with the show.

It Starts in My Feeds

Old man internet warning- this started while reading my RSS feeds in my folder of Photography sites. I clicked to read the PetaPixel article Generative AI is a Minefield for Copyright Law. Of course it opens with the requisite surreal AI generated image, but frankly does not really give me anything new beyond what I’ve read before– especially from those great CC folks. Bottom line, no one can really say for sure where the clear rules and guidelines will land on generative imagery. It’s messy. Again.

But this is where it got me curious. Down at the bottom of the PetaPixel article, the credits read:

The opinions expressed in this article are solely those of the author. This article was originally published at The Conversation and is being republished under a Creative Commons license.

https://petapixel.com/2023/06/18/generative-ai-is-a-minefield-for-copyright-law/

It is “being republished under a Creative Commons license”. What license is “a”? And where is the link to the license? I am an observer of attribution practice, and this one falls way short of the Creative Commons Best Practices. Okay, that’s just being sloppy. I am no perfectionist.

But I am curious.

I follow the one link to the original article published at The Conversation (I have read many great articles there; good writing happens, I declare). What’s curious here is I can find no mention of a Creative Commons license on the article. There is a footer assertion "Copyright © 2010–2023, Academic Journalism Society" — so I dug around for more.

Not that it would ever be obvious to look for license details under a link for “Republishing Guidelines,” but there it is.

We believe in the free flow of information and so publish under a Creative Commons — Attribution/No derivatives license. This means you can republish our articles online or in print for free, provided you follow these guidelines:

https://theconversation.com/ca/republishing-guidelines

The belief in the free flow of information is a nice sentiment. And there it is: they are asserting a CC BY-ND license across their publications. One license to rule them all.

Except.

The conditions.

Now this was somewhat new to me, but I heard the smart and esteemed Jonathan Poritz (certified facilitator of the Creative Commons Certificate) say in an online license quibble that adding extra conditions to a CC license… nullifies it (?) That seems clear from the response on the CC Wiki to the question “What if I want to add some conditions and I clarify what I mean by a specific term? Is there anything wrong with adding conditions on top of a CC license?” though the details written under License Modification fall into the Ask a Lawyer region.

Back to the conditions on The Conversation’s site- the first three seem to be the scope of the CC BY-ND license: “You can’t edit our material” (that’s ND), “You have to credit authors and their institutions” (that’s attribution), “You have to credit The Conversation and include a link back to either our home page or the article URL” (also mostly standard attribution).

The question to me is the next one:

You must use our page view counter when republishing online. The page view counter is a 1 pixel by 1 pixel invisible image that allows us and our authors to know when and where content is republished. 

https://theconversation.com/ca/republishing-guidelines

Can they really make that a condition of reuse? To deploy a tracking pixel?
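For the curious, a tracking pixel of that sort is ordinarily nothing more than an img tag pointing back at the publisher’s server. A hypothetical sketch (the URL and parameter here are invented for illustration, not The Conversation’s actual code):

<!-- invisible 1x1 image; fetching it logs the page view -->
<img src="https://counter.publisher.example/pixel.gif?article=12345" width="1" height="1" alt="" style="border:none" />

Every time a republished copy loads in a browser, that one-pixel image is requested from their server, which is how they learn when and where the article is being read.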

That smells a bit weird to me, along with there being no clear indication of the CC ND license directly on articles (hence why PetaPixel does not know what license to declare??).

Okay, this is truly quibbling, but thinking about these details is important, more than just a simple pat acceptance of the basic rules of licensing.

That’s a Weird Kind of CC0 at Rawpixel

For a recently published post I sought an image of a well known brand of candy– it’s not surprising, of course, that there are not many available. Funny that in my google image search filtered for CC licensed results, a high ranking one was my own flickr photo of the Spanish language version I spotted in Mexico (and likely that might be a copyright infringement, shhhh).

The one I liked (and used) was pointed from Google to rawpixel. There’s a great image! But zoom in close, and there are some fishy things happening.

https://www.rawpixel.com/image/3799787/photo-image-vintage-retro-grass

I am very familiar with the iconic roadside Americana photos of John Margolies, readily available public domain content from the Library of Congress.

Rawpixel does declare the image source (not linked) and the CC0 license. All kosher. So far.

But try to download the image: you are required to create an account. Even if it’s free, why do I have to sign up for an account to access public domain content? (Hint: the upsell answer is in the lower right corner.) So rawpixel is repackaging public domain content but putting a requirement on downloading it.

I can right/control click and download easily (I did), and that trick of hiding images in a .webp file format is no barrier (Preview on OSX now converts it easily to JPEG).

But there’s more. What is that Editorial Use Only link, right below the link to the CC0 license?

Content labeled “Editorial Use Only” are for use in news and events-related articles, non-commercial blogs and websites, broadcasts and other non-profit media. This content cannot be used for commercial purposes including advertising, promotions and merchandising.

Editorial content should not be edited or altered substantially from the original image.

rawpixel.com

Now wait a minute- how can Rawpixel put extra conditions on CC0 content? I’d say this is about as enforceable as wet tissue.

Compare this to the source of this same image at the Library of Congress. No logins required, the images are directly there in usable JPEG format, and there are no extra conditions.

The question is- why does Google give preference in search results to fishy re-packagers of public domain content over the actual source?

We all know the an$wer.

Who Cares?

You should. When we just grab stuff because some web site says it’s free, we, especially as educators, should be looking at the fine detail. The same is true for the inevitable world changing tsunamic technofad (look closely at the top results, outside of Wikipedia: is there a pattern?).

Again, it’s something that at a quick glance has a statistically valid appearance of resembling useful information. If you grab and go, because it’s done for you easily, do you understand or question what you got? Can you zoom in and get an understanding of how it works, where it gets its info from? Can you even view source?

Nice pretty cactus there.


Featured Image: My photo! CC0, natch!

2014/365/263 More to a Cactus Than a Bunch of Needles
2014/365/263 More to a Cactus Than a Bunch of Needles flickr photo by cogdogblog shared into the public domain using Creative Commons Public Domain Dedication (CC0)

Documenting the Web as Documents

By: cogdog

Oi. My clever blog post title generator is not really jelling this morning (the unArtificial quasiIntelligence needs more coffee).

The VHS tape for #ReclaimOpen 2023 has reached the end spool, and people are dusting off their blogs to reflect on the tri-part questions of the Open Web: How We Got There, Where We Are, and Where We Could Go. I was not on the ground there and only caught bits on reruns (apparently my generated spawn crashed the scene).

From Jon Udell’s post in Mastodon, I was invigorated by Mo Pelzel’s thoughts on Whence and Whither the Web: Some Thoughts on Reclaim Open, e.g.

…when it comes to appreciating the sheer magic of the hyperlink. To this day I have not lost the sense of wonder about this marvelous invention.

https://morrispelzel.com/uncategorized/whence-and-whither-the-web-some-thoughts-on-reclaim-open/

and teaching me the wonderful concept of anamnesis — “making present again,” or experiencing the meaning of past events as being fully present.

This circles back to something that has been floating around as a write worthy topic, and how delightful it is to upend and bend around what you think is right. Let a new tape roll.

1. Web as Documents

Ages ago (months), amongst noticing the drying up of colleagues’ blog posts in my reader and how many were sharing their content in the various social spaces, I was a bit taken aback. Many resources I saw being created, activities, collections of things, that I typically would have thought people would publish as good ole durable web pages or something in a blog powered platform– were, arggh, shared as Google Docs.

Docs.

Don’t get me wrong, I love me the use of the shared document. But really, it is the marginal evolution of the Word Processor. I know why people reach for them – they’re easy to use (who wants to WRITE HTML??) (me), they publish to the web, and they’re the environment their workplaces put them in for large chunks of the day.

Yet, the creation of doc hosted web pages rings of “being on the web but not of the web” (Have you ever done view source on a Google doc? Can you really grasp the content and meaning in its un-HTML melange of JavaScript?). Here’s some beef:

  • Those web addresses it creates, like (this is a fake one) https://docs.google.com/document/d/1pGhX4uWZLJYsyo78nAydlZQ10Z8rBT-QutlYZXugly4U/edit?usp=sharing. You cannot even foresee what the link leads to from its URL, not from its source (e.g. a domain name) nor from any kind of file name that suggests its relevance.
  • Is it really durable? Will it be around in 10 years? 3? 2?
  • Where does it fit into a larger work? It’s just another piece of paper hanging out in some Drive. Can the author easily find it? (I know the shape of my drive; without search, I’d never find a thing.)
  • You have given it to Google, who is notorious for giving and then taking things away. Besides, how are they mining it?

I thought I had more. But when I think of the Open Web as the place where we “got there”, is a Web of Documents really going to be anything more than a google sized pile of free floating papers, only findable by… its search? Is this just on the web but not very web like in spirit?

Yeah, I did not really have a well developed case there, just some disgruntlement at seeing an increasing abandonment of creating web content as the kind of web content I know and love, the kind you can inspect as source and learn something from or understand how it is constructed.

Hence the blog post never congealed.

2. The Doc Web

I did a complete turnaround on my chewing of sour web grapes when I stumbled across this piece on The Doc Web, published in some thing called “Lens” (c.f. the web as an infinite space that seems to be boundless), even filed in a section called Escape the Algorithm: “Remote corners of the internet—through the eyes of its finest explorers.” That speaks to me as a rabbit holer.

This article completely undermined my so-called “beef”.

No one would mistake a word processor for the front page of the internet, not unless their computer is nothing more than a typewriter. A hammer is not a portal, and Google Docs, the word processor of our time, is nothing more than a hammer to the nail of language. Right?

Slow down. Google Docs may wear the clothing of a tool, but their affordances teem over, making them so much more. After all, you’re reading this doc right now, and as far as I know I’m not using a typewriter, and you’re not looking over my shoulder. This doc is public, and so are countless others. These public docs are web pages, but only barely — difficult to find, not optimized for shareability, lacking prestige. But they form an impossibly large dark web, a web that is dark not as a result of overt obfuscation but because of a softer approach to publishing. I call this space the “doc web,” and these are its axioms.

https://lensmag.xyz/story/the-doc-web

Its axioms knock down my disdain bit by bit. What I saw as a negative in the obfuscation of the web address at foretelling its content hits on the magic of storytelling, with the element of surprise. An invitation to explore without knowing what’s ahead. And it really rang true with the fantastic linked list of examples in Axiom 5, where it shows the fantastic ways some utterly creative souls have subverted the usual “documentness” of the way 99.9% of us use Google Docs (like ye olde Word Processor) and have created some insanely enjoyable web corners.

Just glance:

Just an image of the linked examples in Axiom 5 of The Doc Web. Aren’t these invitations for a curious mind?

I leave it for you to discover, but these are mind blowing examples of web ingenuity subverting the document concept.

I love this kind of stuff. This shows that despite the age of our algorithmic, AI wielding web T-Rexes, there are all kinds of creative mammals scurrying around in the web underbrush.

I can dig this Web of Docs.

3. It Was Pages All Along

Speaking of the web that was- we always talked about the web as “pages” (skeuomorphing as much as “dialing” a phone) — the construct of them with formatting “tags” is very much taken from the old document producing methods that pre-date the web.

And smack my own head in memories- it very much was the need for “publishing” documents in a shared format that got me on the web in 1993. In my work then at the central faculty development office at the Maricopa Community Colleges, I was eager to provide, across our large system, a means for people to, yes, share resources, but also our published journal, which had been going out in campus mail on paper.

I was driven then to find digital ways to share so much information I saw on paper. And while we had a system wide shared AppleTalk network for Mac users, half of the system was on Windows PCs. Until late 1993, I had been making a lot of effort to make resources available on a Gopher server (a Mac II plugged into the network).

I went through some extraordinary (and laborious) efforts once to publish our journal as a HyperCard stack and convert it with some app to ToolBook (which ran on Windows). It worked… but was really ugly to do.

In that time I had come across the early text based World Wide Web (as it had to be said then) browsers; you’d have to enter a number on a command line to follow a hyperlink, and most of what I saw was papers from some physics lab in Switzerland. It was not “clicking” yet.

Then, like many lightning bolts I had, a wise figure intervened. In October 1993 I was visiting Phoenix College for a tech showcase event, and a great colleague named Jim Walters, very wizard like, handed me a floppy disc upon which he had written “MOSAIC”. All he said was, “Hey Alan, you like the internet, try this.”

This was always a powerful lesson- Jim was not trying to techsplain to me or show off his vast experience; he handed me an invitation to explore. He made a judgement call that this might be of interest.

That of course changed everything. That the web was navigable in this first visual web browser by clicking links, and that it included images, even crude audio/video, was a mind opener. And then I came across the NCSA Guide to HTML. I saw that with a simple text editor, I could create rich media content that could be connected to other places with these magic href tags– and best of all, it was in a format in which both Mac and PC computers could navigate the same content.

Within about two weeks of getting that floppy disc, I came across software that would let me run a public web server from a Mac SE/30 plugged into an ethernet port in my office, and I was off on this journey.

And the bigger light was: yes, I had the know how to set up a web server, but the fact that web pages crafted in HTML could actually be shared on floppy discs or local media meant that I could help faculty learn to create their own web media documents, which became maybe my first somewhat successful web project beyond my institution, Writing HTML.

And that still rings to me, here 30 years after my first web server, that the act of writing the web, not just clicking buttons in an interface, or at least conceptually understanding how the href tag works, is the magic light in all the mix.

The very fact that through the mostly tactile act of writing a tag, I can create a linked connection from my blog here to, say, Mo’s post, is completely what the open web was and still is about.

The link. And Writing Links is an act of generosity for both the linkee and the reader.
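That act, stripped down, is one tag in a text file. Here is a minimal sketch of a hand-written page (the scaffolding is generic boilerplate I am adding for illustration; the link target is Mo’s post from above):

<!DOCTYPE html>
<html>
  <head>
    <title>A Small Corner of the Web</title>
  </head>
  <body>
    <p>
      <!-- the magic: one href attribute connects this page to another -->
      Read <a href="https://morrispelzel.com/uncategorized/whence-and-whither-the-web-some-thoughts-on-reclaim-open/">Mo's post on Reclaim Open</a>.
    </p>
  </body>
</html>

Save that with a text editor, open it in any browser from 1993 or 2023, and the link works. View source on a page like this and you can see exactly how it is constructed.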

A web of Documents or the Doc Web? It does not matter, it’s all webbed.


Featured Image:

Taking Notes on Our Conversation
Taking Notes on Our Conversation flickr photo by cogdogblog shared under a Creative Commons (BY) license

Joy! H5P in Web Article… Alas, No Metadata

By: cogdog

I sure miss the days of supporting the H5P Kitchen project — if anything really hits the elements of the olde 5 Rs, to me, it’s the portability, platform independence, downloadability, and reusability of H5P, plus the thing we really love, built in metadata.

So when I spotted a reshare of this University Affairs online article, ChatGPT? We need to talk about LLMs, my interest was in the writing — and it is a worthy read about getting beyond the AI inevitability to how we grapple with the murk of ethics.

But here is what jumped out to me in the middle of the article– OMG it’s H5P! I can tell from a kilometer away that’s what it is, an Interactive Hotspot Diagram.

Typical of H5P, this has a Reuse button (so you could download the .h5p source) and an Embed code button (I could have inserted it here in my blog), but one is missing… the one labeled “Rights of Use,” which is actually the item’s metadata. You see, there is nothing that identifies the author of this content or how it is licensed — well, until I squinted: in the image itself is © REBECCA SWEETMAN 2023. So what we have here is a fraction of the 5Rs.

Metadata, metadata, rarely loved or appreciated beyond librarians, archivists, data nerds. In the H5P Kitchen I wrote a guide to why/how this is used:

If you look at any of the H5P content there, the three bottom buttons are all present. The Rights of Use button not only gives the license for the overall H5P, but provides a place to give attribution to all media used within the H5P. It’s a beautiful thing. Oh here, I will just show you by embedding something.

But I was curious about that LLM Hotspot, and it was 15 seconds of a web search on the title plus “H5P” that got me to a source, of course, in the eCampusOntario H5P Studio— where we at least see the author credit, but alas, it was shared without specifying a license. Oh, I could have gotten there faster if I had inspected the embed code; the source is in the URL.
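For reference, an H5P embed code is generally just an iframe whose src points back at the hosting platform. Something like this sketch (the domain path, ID, and sizes are illustrative, not the actual Sweetman item):

<!-- a typical H5P embed; the hosting platform is right in the src URL -->
<iframe src="https://h5pstudio.ecampusontario.ca/h5p/1234/embed" width="800" height="600" frameborder="0" allowfullscreen="allowfullscreen" title="Interactive Hotspot Diagram"></iframe>

Anyone who peeks at the markup can trace the content home.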

This is minor quibbling of course. I was tickled to see an interactive document in a web article. It’s just so close to making the best use of tools, but as the word “virtual” goes, it’s always “almost there”.


Featured Image:

Almost Where?
Almost Where? flickr photo by cogdogblog shared under a Creative Commons (BY) license

This Blog Now Powered By A.I. Levine

By: cogdog

Google Docs will have it, Microsoft is brewing it, PhotoShop is doing it, so is Zoom… it’s high time that the advanced technology of CogDogBlog hop on the artificial intelligence inevitable train.

As demonstrated this week at the Reclaim Open 2023 conference, behold the new generation of blogging powered by AI Levine:

This is all the mad creativity of Michael Branson Smith, who along with Jim Groom (he has a blog, right?) contacted me a few months ago with a request for a short video he could use to train some AI that would generate some manifestation of me as a speaker at the conference.

How could I refuse? Michael is one of the most creative and gracious people I have gotten to know in this ed tech game, initially through DS106, but hey, we have spent time together hanging out at his home in Brooklyn.

Michael shared a few prototypes, all of them definitely weird with artifacts, but still sounding like me. Each time he asked if I was okay with it, I said I trust you completely, MBS, do as you see interesting.

I did not travel to Reclaim, but enjoyed the online perch for the conference (I have to say that the Reclaim Hosting set up for live streams is one of the best I have seen: live streams are viewed adjacent to a Discord chat, with a TV Guide like directory at the top).

Michael shared some details of his intent to learn something about AI by doing it himself (DS106 spirit showing). I am forgetting the tools used (it’s in the video and hopefully a future MBS blog post). The original video I sent did not perform well, so Michael relied on one from YouTube I recorded in 2019 as an intro to students at a Coventry University class I was doing something with. Meet AI Levine’s puppet:

Michael did say generating the AI audio was the bigger challenge- I believe he used ElevenLabs for text to speech. AI Levine’s blabbering was based on a ChatGPT query to talk about SPLOTs, derived from 4 blog URLs Michael gave it to chew on. The “hallucinating” is readily apparent in the history portion, but the gist of it is maybe palpable.

As was obvious to most in the room, the video generated was weirdest with the mouth, and Michael ended up blurring AI Levine a bit in post editing.

In emails the day or two before, Jim Groom suggested some kind of irate outburst from me in chat where they could then bring me in to the room (via a whereby.com link). That would be fun, and of course, anyone ought to get bent out of shape by being deep faked. That’s too easy.

My play was then to act dismissive, to assert that I am actually a supreme AI construct:

For a long time I have been “training” myself by reading the entire internet. Everything I have made, written, created was generated from a complex process applied to this data. SPLOTs could only have been generated this way.

(no idea if this is what I actually said)

All in all it was quite enjoyable, as was seeing the much different views on the live stream and the whereby video. All surreal.

Thanks Michael for creating this fallible facsimile in every AI sense of hallucinatory mockery and thanks Reclaim Hosting for putting this on the program.

For me, this again puts on the table some impressive technical feats that produce, well, the equivalent of flavorless mind porridge. Generative content stays within the guardrails of statistical probability, yielding an appearance of pseudo meaning. Downtown Dullsville.

What I’ve not seen AI do is make meaning and connections between the unlikely related. For example, in my regular training on internet content (aka reading blogs), my own neural pathways ran off in different directions in Martin Weller’s fab post Yankee Shed Foxtrot, where he weaves together different, seemingly unrelated topics- a book on “shedworking” and a song by Wilco.

Reading this as a human means I do not regurgitate sentences based on some vectors of word probability. For me and my [Un]Artificial [questionable] Intelligence, it sparked a series of other un-stochastic connections. My comment on Martin’s blog was one of those that really should have been a post here…. so I will steal my own words:

What you demonstrate in action here, oh ye faithful blogger, is something AI can’t- make connections between ideas and things that are statistically out of range. Yes, we have neural networks in our grey matter, but the computerized ones that borrow the name cannot (well, Stephen will assert it’s eventually possible) connect this way.

What your post does for me is trigger other connections. The idea of “shedworking” (the name new but the idea resonates) for some reason reminds me of a book Scott Leslie recommended (see how connections work, I can remember it was on a camping trip where we canoed to an island in BC), a book called “Shop Class as Soulcraft” http://www.matthewbcrawford.com/new-page-1-1-2 that honors the value of work we do with our hands; it need not be mechanical, or done in a shed, but it is a testimonial to the ideal of craftspersonship and what it does for us (I can extend it to creating web pages by hand).

As far as an academic metaphor (you are cornering the market!), I love the idea of conceptually making space and time for crafting. And yes, institutions are not providing this per se, but there is a two way play here- we as individuals also have some responsibility for not making the time and place ourselves for doing this. People who design their shedworking places are not just given them by an organizational entity; they take it on themselves. We as individuals have a stake in this.

And then for the music metaphor, I don’t know if you are into the Song Exploder podcast https://songexploder.net/ It’s a brilliant concept where musicians break down a finished song into its origin story, how the tracks came together, layer by layer. It’s a fabulous metaphor (to me) for what we do little of in our work (outside the last few bloggers standing): not just pumping out finished work, but sharing the craft of how it was made, how it emerged, if you will, from their audio workshed.

I wanted to launch an idea like “Course Exploder” where educators could break down a lesson or an OER or a course design in the same way.

What I enjoy about Song Exploder is that I get these stories for songs and genres I never listen to (sort of like how university requirements had me take courses I would have never chosen on my own interests). I was just listening to the episode where Seal talked about the evolution of the mega pop hit “Kiss From A Rose” (heard the song plenty but can’t say it’s in my listening queue). It’s riveting to hear him talk about its origin, but there is a bit in the opening where he talks about his start before being a mega star:

“I was living in a squat. I didn’t have any money and I was just basically getting my act together, trying to figure out who I was musically. And “Kiss From A Rose” came out of that period, when you sort of do things not for any other purpose than because that’s what’s coming out of you. I had no experience in the studio, in a proper recording studio, at that point.”

It’s that line– “when you sort of do things not for any other purpose than because that’s what’s coming out of you” that connects in my head back to Shopcraft and sheds and why we even bother trying to do stuff.

My comment on Martin Weller’s Yankee Shed Foxtrot blog post

I may be on the far end of the bell curve of what people want from the world these days, but I crave more the intelligence that makes meaning from dissimilar ideas, not one just spitting back the statistically relevant blandchastic stuff.

You heard it all from me, the always A.I. Levine.


Featured Image: Combination of one frame from the Reclaim Open 2023 presentation The SPLOT Revolution will be Artificial with a conference logo and a SPLOT logo, call the whole thing CC BY.

Footprints in the Web

By: cogdog

What kinds of web footprints are you leaving? Or does it matter since they just blow away? Where do you choose to do your walking?

I am not talking about your data trails; I am talking about the trails you make as a contribution for others.

I know my answers, which are just mine, and are not likely anywhere near yours. But with each day of 2023, the web I walked into in November 1993 with the widest sense of wonder (I wonder about when I last wondered about wonder) is fraying away, or being left behind for the commodified mall of platforms. Or just left as error messages. The 404 web.

I could go darker, I say to my 3 or 4 readers. But. The Wonder is still there, I need to trust in that, and perhaps just extremely unevenly distributed as the past future used to go.

1.

I don’t know why I reached for numerical headings, but am again borrowing your style, Kate Bowles. You see, like the current inevitable technical overlord, my mind is “trained” on stuff (though training is a narrow word for what I think my grey matter CPU does). All I have read and seen is in me, and then I generate something from it. Who ya callin’ Artificial?

There was an online discussion–well, thread? blip?–where some others I do “follow,” friends I have been in the same room with, were talking about a certain avian-named technology dying.

My internal storage database went rummaging around for an article I read a long time ago from a rather prominent writer who had driven an interesting stake into the heart of claims that technology “dies”. I remember they had pulled a random page of tools (like implements) from an 1890s? 1900s? Sears Catalog, all technologies one would guess are dead. But the author found somewhere in the world, some artisan was still making them.

I could not for the life of me remember the author’s name. I tried the old oracle of knowledge with searches like “writer who found tools from old catalog still in use” and came up empty, just stuff about library catalogs. A few more failed. Is it the search fail or my weak prompts? Because apparently, all future work will be typing prompts into boxes.

Then I remembered I had likely blogged about it. My blog, my outboard brain! And shazam, my own blog search on old catalog tools still being made hits it as a first result- from Feb 1, 2011, Not One Tech Extinction reconnects my neurons! That was Kevin Kelly, a big shot that back then I had as a guest for an NMC online show I did (those footprints of course are wiped out, as is the recording done in old Adobe Connect).

But I did find what I sought, Kelly’s 2006 blog post on Immortal Technologies:

One of my hypothesis is that species of technology, unlike species in biology, do not go extinct. When I really look at supposed extinct species of technology, I find they still survive in some fashion. A close examination of by-gone technologies shows that somewhere on the planet someone is still producing it. A technique or artifact may be rare in the developed world but quite common in the developing world. For instance, Burma is full of ox-cart technology; basketry is ubiquitous in most of Africa; hand spinning still thriving in Bolivia. A technology may be enthusiastically embraced by a heritage-based minority in modern society, if only for traditional satisfaction. Consider the traditional ways of the Amish, or modern tribal communities. Often old technology is obsolete, that is, it is not very ubiquitous or second rate, but it still may be in small-time use, as many old-fashioned ways are.

http://www.kk.org/thetechnium/archives/2006/02/immortal_techno.php

Yep, these days a blog is “enthusiastically embraced by a heritage-based minority in modern society, if only for traditional satisfaction,” its posts in small-time use, left as durable footprints on the web, sitting right where they were 17 years ago.

2.

Someone’s re-share in Mastodon (oh yes, boost), maybe it was Roland Tanglao, brought a sad note from Boris Mann (who I crossed paths with long ago in the Northern Voice Vancouver days).

Boris’s message marked the passing of Darren Barefoot, who was the co-founder of Northern Voice. In his last days before cancer closed the lights, or maybe written ahead of time, Darren left his last web footprint, a post on his own blog/domain, They Were All Splendid.

I will not even taint it by trying to summarize. Read it yourself. I had some memories of seeing his earlier posts (tweeted maybe by Boris or Roland?) or perhaps in flickr photos of Darren’s Splendid things.

His site lists a long set of footprints, his first web site in 1999, but what I remember is his post describing the idea that led to a survey that led to the first Northern Voice conference in 2005. I became aware of it of course because Brian Lamb blogged about it (more web footprints still visible), and I think he reached out to me as I went to Northern Voice for the first of several times in 2006.

I can’t say I knew Darren; I probably met him, but I was there in that era, when nothing was proven and everything was possible for the web. I can say I was there. So many things for me came as an outgrowth of Northern Voice: the connections, friendships, photos.

Web footprints that will be there for a while.

3.

Sadly, Darren was not the first Northern Voicer to blog their own last post- I remember being astonished/amazed at the web footprint left behind by Derek Miller in 2011, alas also a victim of cancer.

Northern Voice attracted a bunch of digital photography nerds, running informal sessions where people would gather and share/talk about gear, software, and invariably, go out on the Vancouver streets for a photo walk.

That’s where I met Derek. I cannot remember specific interactions, only that he was always gracious. The thing that is hard to describe about those Northern Voice conferences is how there was no prestige hierarchy; it was flat, even though it drew upon people from not often overlapping Venn regions- tech nerds, educators, and social activists.

I remember using Derek’s example photos for How a Camera Works, showing visually how aperture and shutter speed affect images.

Speaking of web footprints, I had forgotten Derek’s penmachine.com domain was in one of my own Northern Voice talks in 2011- Looking Through the Lens, where I tried to make analogies between the functions/settings of photography and learning.

But looking at that old site (broken links, dead flash embeds), there are URLs that spark memories- I always liked using Kris Krug’s story behind a photo that went beyond viral on flickr. Kris too was like a rock star photographer, yet treated me, a young tech head just starting in digital photography, as an equal.

It Was Him
It Was Him flickr photo by cogdogblog shared into the public domain using Creative Commons Public Domain Dedication (CC0). Yes, I am wearing here, at the 2008 SXSW conference, my Northern Voice 2006 t-shirt.

I see in my links something stunningly relevant, a post from Kris’s blog:

What we leave behind is our digital footprint (Kris Krug) http://www.kriskrug.com/2011/02/01/what-we-leave-behind-is-our-digital-footprint/

With sad irony, that digital footprint link ends up at a domain for sale sign. Fortunately, ghosts can be summoned from the Wayback Machine.

Our future is being documented by us in our present. Each and everyone of us who has a digital camera, a cellphone, a computer or even a camera phone has the task of creating our living digital history in real time. Our digital landscape has changed drastically from the meaningless dribble that once was in a stream of collective consciousness that is being contributed to by all of us. Collectively everything that we capture is part of our digital footprint that will exist as a living breathing legacy of ourselves online.

…..

The combination of our collective task of documentation and incentive of sharing has joined forces with the thriving Open Source culture. Not only are we inspired to create and then share but we are also infusing the two into spaces, like unconferences and camps, which allow for both situations to transpire. These spaces are open to everyone, sustained by all and owned by none. It only makes perfect sense that our changing interaction with our present state would happen collectively in our own making.

What we leave behind is our digital footprint Kris Krug, Feb 1, 2011

Hello from 2023.

4.

To go back to where this started, mobius strip like, I said “dying” not dead.

I am not contemplating my mountain of web sites as some kind of legacy that matters. Taking care of and preserving my web tracks is not about my last blog post as a goodbye. If anything, it’s perhaps about the first one, and all the ones in between, all of my Pinboard bookmarks (and earlier ones imported from del.icio.us), my flickr photos, the bits and bobs of my archived web sites and ones I have rescued from the dead when others closed shop.

I firmly believe in the web we have woven ourselves (not done by others for us) and the one we care for as individuals. I hate being responsible for breaking any link I have created.

If your followers, likes, and LinkedIn connections are the tracks you care about so be it.

My stuff matters. To me, and I care about that fading dream of the web. Without it, what is there?

5.

There’s always stuff to add after publishing! I wonder if I should comment on my own posts (it helps with the illusion that no one reads me). But sitting in an open tab was Jason Kottke’s marking of his 25th year of leaving footprints

 I realize how it sounds, but I’m going to say it anyway because it’s the truth. When I first clapped eyes on the World Wide Web, I fell in love. Here’s how I described the experience in a 2016 post about Halt and Catch Fire:

When I tell people about the first time I saw the Web, I sheepishly describe it as love at first sight. Logging on that first time, using an early version of NCSA Mosaic with a network login borrowed from my physics advisor, was the only time in my life I have ever seen something so clearly, been sure of anything so completely. It was a like a thunderclap — “the amazing possibility to be able to go anywhere within something that is magnificent and never-ending” — and I just knew this was for me and that it was going to be huge and important. I know how ridiculous this sounds, but the Web is the true love of my life and ever since I’ve been trying to live inside the feeling I had when I first saw it.

https://kottke.org/23/03/kottke-is-25-years-old-today

I too want to be on the web and “live inside the feeling I had when I first saw it” (back when we had to refer to it as the “World Wide Web” and not simply “the web”).


Featured Image: A combination of two of my own photos, which have their own tracks– Steps into Time flickr photo by cogdogblog shared into the public domain using Creative Commons Public Domain Dedication (CC0) and 2016/366/292 The Web is a Tentative Thing flickr photo by cogdogblog shared into the public domain using Creative Commons Public Domain Dedication (CC0)

A Public Domain Face Only Alamy Could Love

By: cogdog

Ain’t that llama a cutie? What is that smile about?

Ah, it’s how this person (a.k.a. me) can stand the fact that someone is making money off of a photo I took and shared on the internets. Doesn’t a license protect me?

Welcome to my unconventional corner of the Creative Commons tent, already explained in 2016:

So counter to thinking some other flavored Creative Commons license will protect me– I have opted to give my store away. Since I never intended to profit from my photos, how can I lose what I never “moneytized”? I am fine with people making commercial use of my photos, of taking and using without asking. This has been my ongoing experiment for these seven years, to find out how much I will suffer by putting my 70,000 flickr photos in the public domain.

In fact, I have gotten more in return than money… gratitude and stories.

But What’s With the Llama Face?

One perk of the Flickr pro account is access to Pixsy, a service that can much more reliably locate places on the internet where my photos have appeared. This service is set up to aid in “going after” stolen images, but my use is mainly to just enjoy seeing sites where my photos have gone. Sometimes I have gone through just to add to my album of photos that have been reused (283 so far). You know, a little self-flattery.

But it also does provide something I have had to swallow with my giveaway choice described above:

That was the first time I discovered that there are “people” out there who scoop up public domain photos, upload them to a stock photo outfit like Alamy, and earn a gazillion (or 20) bucks. I should be OUTRAGED. But then so should be the schmuck who pays $60 for a photo they could get for free from my flickr.

Recently, I looked at my Pixsy updates, which reported finding 33 of my photos floating around on Alamy, like, heck, my goofy llama. You can get it free from flickr or pay Alamy £29.99 to use it on a web site.

Who is smiling goofy now?

It’s interesting that the entity who added my photo kept my original title (the “2010 365” indicates this was one of my daily flickr photos for 2010). Following this, I can play some search gimmicks and find more for sale on Alamy:

I could go on… How do I know these are mine? There is no attribution, but it’s easy… I took ’em. But they are easily found in each of my flickr albums for daily photos.

I just have to wonder too about someone out there having to laboriously download my photos and then upload them to Alamy, a job of minimal artificial intelligence.

Shall I Play Alamy?

No, I am not changing my public domain tune. But in the interest of being curious how this shady game is played, tonight I created my own Alamy account and uploaded 3 of my own public domain images, as they require, for “Quality Control”— can I pass muster with my own images?

Here is my pledge- if anyone is goofy enough to pay Alamy for my public domain photos, any proceeds that pile in will be donated to the local Humane Society.

It’s a public domain face a llama mother could love.


Featured Image: Yours for the taking, sans watermark.

2010/365/2 A Face Only a Llama Mother Could Love
2010/365/2 A Face Only a Llama Mother Could Love flickr photo by cogdogblog shared into the public domain using Creative Commons Public Domain Dedication (CC0)

Humanery and/or Machinery

By: cogdog

Despite the apparent demise of blogs, the flat line of the RSS-ograph blips with a pulse from David Kernohan “on chatbots.” FOTA is alive!

Unsure if my comment gets through the gate (a first one generated a critical WordPress error, sorry, David), but I have to at least assert my assertion, as if it blips anywhere in the raging discordant discourse: intelligence might be based on pattern recognition, as Stephen [Downes] asserts, but pattern recognition should not be mistaken for intelligence.

So when David passes a linked reference to the Colossus as the dawn of pattern guessing to decrypt war time messages, my pattern recognition goes to where no GPT can fabricate:

2014/365/125 Just Part of Colossus
2014/365/125 Just Part of Colossus flickr photo by cogdogblog shared under a Creative Commons (BY) license

This photo was taken on my own visit to the National Museum of Computing at Bletchley Park, that being a memorable day when Dave and his partner Viv drove me all the way from Bristol, where I was visiting them, to Milton Keynes, where I spent a week at the Open University.

Maybe a machine could mine the facts from my blog posts and photos, but it would never make connections, the feelings, to the experience of being there that are not digitized or accessible to wholesale scraping. Never.

Or is this my own flailing effort to raise a pitifully tiny flag of I Am Human in front of the advancing, inevitable horde of machines? For an image I could have plopped a prompt into DALL-E, MidJourney, or Stable Diffusion, but why, when I can deploy one of my own making?

2023/365/63 Infinite Clones
2023/365/63 Infinite Clones flickr photo by cogdogblog shared into the public domain using Creative Commons Public Domain Dedication (CC0)

I could try my best to weave more words around my emerging thought patterns, yes, ones that I generate from my own sum of vast experiences. And truly, I could say that I myself, with this nerve network plugged into a 3 pound skull enclosed non-battery powered device, merely have been training 50+ years on written, visual, and auditory media, much of which I did not explicitly ask to use, from which I generate, through some mystical process, my “own” words? My “own” imagery?

Everything is a Remix but AI Does Not Remix Like an Artist

Who better to turn to than Kirby Ferguson to wisely delve into Artificial Creativity?

Stop, watch the whole thing. I mean the whole damn series. I can only yank quotes.

Of all humanity’s technological advances, artificial intelligence is the most morally ambiguous from inception. It has the potential to create either a utopia or a dystopia. Which reality will we get? Just like everybody else, I do not know what’s coming, but it seems likely that in coming decades these visions of our imminent demise will seem campy and naive, because our imaginings of the future always become campy and naive.

Everything is a Remix Part 4

He takes AI to “court” on three counts, and makes a point that many don’t want to accept: that harvesting all of the “stuff” readily available is maybe not the ethical peg on which to hang the purveyors. If you buy into his theme that everything is a remix, that means everything is available, as he has done in his video.

But do not take this as suggesting there is a free ticket to just grab content for the classic “because you can” reason. Follow Kirby Ferguson’s statement about all the media he has remixed into his video:

On some videos about AI the big reveal is that this video was actually made by AI. But this video and this series is the opposite. Nothing has been AI except where I cited AI art. This is entirely human made, the words are all mine but they’re merged from the thoughts of countless people. Everything you’ve seen and heard is from real filmmakers and musicians and game developers and other artists. All these thoughts and all this media were remixed by me into something new and yes I did it all without permission.

Everything is a Remix Part 4

The big difference is that this filmmaker provides credits / attribution to the sources. It is very clear what was used. There is no masking of source content or how it was used, hidden behind the facade of a commercial purveyor whose very name has been washed open with techno-clorox.

Also, lost in the court section is a very valid question-

Training AIs on individual artists’ work does seem wrong. Everyone should be able to opt out of all training sets, and maybe AIs should simply not train on images from active art communities. Also, some company should make an image generator trained on public domain and licensed images, which would avoid this Hornet’s Nest entirely. Somebody please do this.

Everything is a Remix Part 4

Why is there no ethical entity out there creating training sets from public domain or openly licensed materials? Or why doesn’t quote/unquote “OPEN” ai DOT com, which already trains its machines on Wikipedia amongst everything else, just create a version limited to truly open content? About the only thing I found was an image generator on Hugging Face that looks like it does this, but I am not clever enough to make it do anything.

There is a free idea for anyone to pick up.

Finally, Kirby Ferguson ends with a compelling (to me) assertion of the essence of creativity.

AIs will not be dominating creativity because AIs do not innovate. They synthesize what we already know. AI is derivative by design and inventive by chance. Computers can now create but they are not creative. To be creative you need to have some awareness, some understanding of what you’ve done. AIs know nothing whatsoever about the images and words they generate.

Most crucially, AIs have no comprehension of the essence of art: living. AIs don’t know what it’s like to be a child, to grow up, to fall in love, to fall in lust, to be angry, to fight, to forgive, to be a parent, to age, to lose your parents, to get sick, to face death. This is what human expression is about. Art and creativity are bound to living, to feeling. Art is the voice of a person, and whenever AI art is anything more than aesthetically pleasing, it’s not because of what the AI did, it’s because of what a person did.

Art is by humans for humans.

…

Everything is a Remix is a testament to the brilliance and beauty of human creativity. In particular it’s a testament to collective creativity. Human genius is not individual it is shared.

Everything is a Remix Part 4 (emphasis added by me)

Please watch this video! All of them!

Back To The Hammer Hand

Another Old Thing
Another Old Thing flickr photo by cogdogblog shared under a Creative Commons (BY) license

But it’s not as clean as just going John Henry and making an untenable slice of human versus machine. Artificial Intelligence “stuff” is a tool, but it’s not “just a tool.” I am reaching back to something I often rely on from Gardner Campbell’s explanation of Marshall McLuhan:

“There is no such thing as “just a tool.” McLuhan wisely notes that tools are not inert things to be used by human beings, but extensions of human capabilities that redefine both the tool and the user. A “tooler” results, or perhaps a “tuser” (pronounced “TOO-zer”). I believe those two words are neologisms but I’ll leave the googling as an exercise for the tuser.

The way I used to explain this in my new media classes was to ask students to imagine a hammer lying on the ground and a person standing above the hammer. The person picks up the hammer. What results? The usual answers are something like “a person with a hammer in his or her hand.” I don’t hold much with the elicit-a-wrong-answer-then-spring-the-right-one-on-them school of “Socratic” instruction, but in this case it was irresistible and I tried to make a game of it so folks would feel excited, not tricked. “No!” I would cry. “The result is a HammerHand!”….

http://www.gardnercampbell.net/blog1/doug-engelbart-transcontextualist/

So no “just a tool,” since a HammerHand is something quite different from a hammer or a hand, or a hammer in a hand. Gardner has given me more directly, in email:

I got to that in part because of McLuhan’s famous dictum “the medium is the message.” Most folks appear to think he meant that the medium shapes the message. If you read the piece in which the phrase appears, however, you can see that’s not what he meant. Instead, McLuhan thought of every medium as a message about what we are and desire as human beings. He said the electric light was a message. Every medium should tell us something meta about itself, and something vital about humanity. A medium is not just a channel for transmitting stuff. A medium is also itself a message, a transmission. Can we understand the medium’s message about itself, and thus about us? That’s why the book is called Understanding Media. What is the message these media convey about themselves? and about mediated experience generally?

So with that, I built on Alan Kay (and I think others as well), who said “we shape our tools, and after that our tools shape us,” bringing in the idea of man-computer symbiosis, putting it all within the context of Engelbart’s integrated domain, and then re-reading McLuhan to find a way to express what I took to be something essential about his ideas of human transformation in the development of mediated experience, and I came out with hammerhand.

Gardner Campbell, personal communication

Much of the educator reaction to ChatGPT (which to me is narrow, as there is much more we should be wrapping our heads around) is so focused on the fear/worry/change factors rather than “ideas of human transformation in the development of mediated experience.”

So This Thing Happened Today Which I Defy Anyone To Experience By Typing Prompts Into a Box

Going back to where I started, with David Kernohan’s On Chatbots post, he gives just a short bit at the end to maybe the larger idea of his two, under the heading “A Matter of Semantics”:

I want to close my argument by thinking about the other major strand of artificial intelligence – an associative model that starts (in the modern era) with Vannevar Bush and ends with, well, Google search. The idea of a self-generating set of semantic links – enabling a machine to understand how concepts interrelate – is probably closer to the popular idea of artificial intelligence than toys like ChatGPT.

http://followersoftheapocalyp.se/on-chatbots/

meaning (I think) that the interconnected web of ideas imagined by Bush, which influenced Engelbart and actually was instantiated by Tim Berners Lee, is the connectivist idea that the web itself, changing with every new bit linked onto it, offers more potential for making something akin to intelligence than chatbots that are merely regurgitating parts of it in a way that just parodies intelligence, not embodies it.

So this happened today. It is of no significance to any discussion threading out in the ghosted public square of twitter or the de-aggregated but vibrant corners of Mastodon, certainly not the dead-to-me place I will never mention, the spew pots of (f*** it, I cannot call it “Meta”, it’s always Facebook), or the iteration of “America Online is to the real Internet as LinkedIn is to ______________”… Oh, I might have lost my thought completely, as it humanly happens.

Can I get help?

I tried by asking ChatGPT "Help Alan get back on target as he goes off track trying to write a new post for CogDogBlog"

There is a bunch of drivel like "Define the purpose of the post: Alan should begin by asking himself why he wants to write this post. Is it to share information, spark a discussion, or provide a tutorial? Defining the purpose will help him stay focused throughout the writing process.

Create an outline: Once he has identified the purpose of the post, he can create an outline. This will help him organize his thoughts and ideas in a logical manner. He should include a clear introduction, main points, and a conclusion."
This makes this blog sound like some marketing cheese.

This is what people have their undergarments wadded up about? Seriously? I push back.

I had to ask,"Have you actually read CogDogBlog? It does sound like it." to which the parrot barfs "As an AI language model, I don't read blogs or any other material in the way that humans do. However, I am designed to understand and generate human-like responses based on the vast amount of information that I have been trained on. So, I am familiar with the general concept of CogDogBlog, which is an educational technology blog maintained by Alan Levine, also known as "CogDog." The blog covers a range of topics related to digital storytelling, open education, and creative use of technology in education."
One cannot get more Gurgitating Parroted Trash than this. If you are worried about AI generated text, then maybe look some in the mirror at your own human generated text.

Okay, I am left to my own story making.

Today I sat down to catch up on a few DS106 Daily Creates, it being the very essence of acts of human creativity assisted by tools (using a “TDC HAND”). This was one challenge from a few days ago which in true TDCness, gives a nudge, and opens a door to respond in almost any way.

#tdc4081 #ds106 When randomly generated people from different countries meet each other

You have the power to fabricate people from a mere click! Show us a photo how two of them might meet each other.

First, select two names (from different countries) created by the Names Encyclopedia Random Generator. Then find a photo that might represent them, and add a caption to suggest the context of their meeting.
tdc4081 #ds106 When randomly generated people from different countries meet each other…

Lots of ways to run with this, so I just start with the random names generator that suggests possible names from different countries. Cool! I love random stuff and never saw this one. There are 12 countries listed, each with 10 names. I just decide to be quick and use the first names in the middle row:

4 random made up names from Romania, Hungary, Spain, and Sweden.

Before getting to where / how they meet, I decided I needed pictures. Before everyone got wrapped up in generative text posing as intelligence, there was the phase of all the generative adversarial network (GAN) tools making realistic photos of people and things that do not exist. If you want a real diversion, see This X Does Not Exist. But I went for the one I remember for generating people, thispersondoesnotexist.com, but that now seems gone and only goes to some AI outfit.

But I did find a similarly-URL-ed version at https://this-person-does-not-exist.com/ that was interesting, as there are a few more options to choose from (gender, age range, a few ethnicity options), so I generated 4 non-existent people for Ionut, Lázár, Angel, and Elenor. I imported them into Photoshop using one of the panorama collage layouts, which spread them out like photos on a table.

Then I tried to think of where to place these non-existent people. I first reached for a new browser window, thinking of some sort of technical image, like a computer circuit board. This is when unexpected-ness happened.

You see, I use the Library of Congress Free to Use browser extension that puts a random public domain image on my screen each time I open a new browser tab. I was fully intending to open an image search, but there, by random chance, was my answer: a road sign for Hanks Coffee Shop. Even better, it was from Benson, Arizona, a place I have been before.

Hanks Coffee Shop sign, 4th Street, Benson, Arizona (LOC)
Hanks Coffee Shop sign, 4th Street, Benson, Arizona (LOC) flickr photo by The Library of Congress shared with no copyright restriction (Flickr Commons)

So now it all came together, these people who do not exist, met up for coffee at Hanks in Benson. A bit more Photoshop editing to make a cloud background, superimpose the names of the four, and I was done.

@ds106dc #tdc4081 #ds106 Ionut, Lázár, Angel, and Elenor materialized at https://t.co/DKTTg3iMq5

Through magic randomness of LOC Random image in a new browser tab, met at Hanks Coffee Shop, in Benson, AZ, which *does* exist, see https://t.co/MDHKBoI7h3 pic.twitter.com/WLDN6ucC5M

— Alan Levine (@cogdog) March 19, 2023

“So what?” is certainly a reasonable response. Couldn’t I save time and just type into an image prompt box, “Photos of 4 people displayed under an old time coffee shop sign”? And maybe iterate a few times until it’s “good enough”? Yes, but is making art about the process or the product? Maybe sometimes it is just getting the thing done, turn it in, as they say.

But what is the connection to it? Would an AI remember driving through Benson, AZ on a memorable road trip to camp in the Chiricahua mountains? Would it remember a completely unrelated connection from these photos in the Flickr Commons, and that there was a call a while ago for examples of galleries of themed images from the commons? And would it then decide, for no productive reason, to search for other Arizona road sign images in the Flickr Commons, create a gallery, and then share it back?

I’d say, plausibly, eff no. I want to be doing stuff described as “Art is the voice of a person and whenever AI art is anything more than aesthetically pleasing it’s not because of what the AI did it’s because of what a person did.”

I’m not saying at all don’t do AI. And I am absolutely intrigued by what it might offer; we have hardly even scratched the surface. But it does not always mean we have to just line up as robot servants to the AI Industrial Complex.

If we lose our ability, or interest, to be non-stochastically human in our tasks, then we end up being “derivative by design and inventive by chance.”

Fin

Never. But I am hoping, maybe before too long, to see another just-thinking blip from FOTA.


Featured image: Humanly created and selected, mine

Beware the Machines!
Beware the Machines! flickr photo by cogdogblog shared under a Creative Commons (BY) license

Back to the AI Past: ChatGPT and ELIZA on a Blind Date

By: cogdog

Perhaps there needs to be more romance in the world of Artificial Intelligence? For a while, I had some thoughts back to good old ELIZA. What does she think of all this stuff? Most likely she would not say but deflect back with more questions.

I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

Joseph Weizenbaum in Computer Power and Human Reason (1976)

What he describes has been called the ELIZA Effect, “the tendency to unconsciously assume computer behaviors are analogous to human behaviors.”

ELIZA, Meet ChatGPT

I solicited Doc Brown to drive the DeLorean back to 1964 but he was unavailable…

You can find many web-based implementations; my favorite is E.L.I.Z.A. Talking, where you can simulate her on a machine of her 1964-1966 era, and even speak to her.

Talk to and hear back from ELIZA https://www.masswerk.at/eliza/
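What still amazes me is how thin the machinery behind the illusion is. ELIZA is essentially keyword spotting plus pronoun reflection, and not much more. Here is a minimal Python sketch of that idea (my own toy approximation, not Weizenbaum's actual script format, and nowhere near as polished as the masswerk.at version):

import random
import re

# Reflect first/second person so "I" in the input becomes "you" in the reply
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

# Keyword rules: a pattern to match, and reply templates reusing the captured text
RULES = [
    (r"i want (.*)", ["What would getting {0} mean to you ?",
                      "Why do you want {0} ?"]),
    (r"i am (.*)", ["How long have you been {0} ?",
                    "Do you believe it is normal to be {0} ?"]),
    (r"(.*)\bmother(.*)", ["Tell me more about your family."]),
]

# Content-free deflections for when no keyword matches
FALLBACKS = ["Please go on.", "What are your feelings now ?", "I see."]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(text):
    text = text.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(templates).format(*groups)
    return random.choice(FALLBACKS)

print(respond("I want to make fun of ChatGPT"))
# e.g. "What would getting to make fun of chatgpt mean to you ?"

That tiny bag of tricks is what induced the “powerful delusional thinking” Weizenbaum fretted about. Keep it in mind when the conversation below starts to feel real.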

So I had to play an intermediary setting them up on a blind date, shuffling questions and responses between the two of them.

ChatGPT screen with my opening prompt “You have shown up for a blind date with someone you only know as ELIZA. When you meet her, what will you say as an introduction?”

To set the scene, I asked Stable Diffusion Online to create some visuals (I must be terrible at this because none of them were what I had in mind, maybe the first one top left is close?) using a prompt “an older woman and a younger bearded man seated at a restaurant table on a date’ photorealistic.”

Pick your own scene for this blind date:

Four images generated by Stable Diffusion, each as awkward as this date must be.

I had to intervene a few times, but let’s listen in to this blind date:

To create this I just shuttled the responses back and forth between the two interfaces (ChatGPT sometimes needed me to nudge him, he was nervous?)

I turned to another AI service, Murf.ai, to render the audio. I am not quite sure where the AI comes into this versus the standard text-to-audio capabilities I’ve used before (the voice from https://www.masswerk.at/eliza/ is much better).

But shrug, if you put AI in the name, who really knows what it does?

You should be able to “Murf” it yourself via https://murf.ai/share/lebr6p6e

The Murf.ai interface for rendering the conversation as audio. The voices are Ethan saying my lines, Claire saying ELIZA’s lines, and Ryan doing ChatGPT.

I was not going to pay for the download fee, so I did the lazy route of playing audio from the Murf site and recording in Audacity on a second laptop.

Back to the Present

I did this exchange a week or two ago but it lingered as files. I was inspired a bit last night after Cori and I went to the classic Mae Wilson Theater in Moose Jaw to travel back in movie time to 1985.

Back to 1985 on the Big Screen
Back to 1985 on the Big Screen flickr photo by cogdogblog shared into the public domain using Creative Commons Public Domain Dedication (CC0)

So this blind date really was some time travel!

It’s also about as much as I feel like doing amongst the tidal wave of fervor over AI… like ten years ago, when I turned to mocking a hyped edtech trend, here I start a new round of mocking.

Again, turning the time machine back to ELIZA’s creator, from the Wikipedia article on the ELIZA Effect:

Joseph Weizenbaum considered that not every part of the human thought could be reduced to logical formalisms and that “there are some acts of thought that ought to be attempted only by humans”.  He also observed that we develop emotional involvement with machines if we interact with them as humans. When chatbots are anthropomorphized, they tend to portray gendered features as a way through which we establish relationships with the technology. “Gender stereotypes are instrumentalised to manage our relationship with chatbots” when human behavior is programmed into machines.

https://en.wikipedia.org/wiki/ELIZA_effect

How can we not interact with machines as humans when they are seemingly talking to us? Many of the things people are saying seem to leap ahead of the reality: what they are seeing is really an approximation of what looks like conversation, produced by [quickly] autocompleting ahead. Of course the references ChatGPT names are not real, because it is not doing research; it is just generating something that statistically approximates the appearance of a relevant reference.
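To make “autocompleting ahead” concrete, here is a toy Python sketch of the core move: predict the next word from counted statistics. A bigram counter is absurdly smaller and dumber than an LLM (which predicts from learned weights over long contexts, not a lookup table), but the “designed to seem likely” principle is the same:

import random
from collections import Counter, defaultdict

# Count which word follows which in a scrap of training text
corpus = ("the model has no notion of truth because the model only "
          "predicts the next word and the next word after that").split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def autocomplete(word, length=8):
    """Generate a likely-looking continuation, with zero regard for truth."""
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        # Sample in proportion to observed counts: statistically plausible,
        # never checked against any fact
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(autocomplete("the"))

A reference generated this way has only the shape of a reference; that is the whole game.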

This ties to Maha Bali’s criticism of any communication that starts with “I asked ChatGPT to…” as really an abdication of care, not even an automation of it. Because again, this black box, for which we have no intuition or understanding of how it produces words (cue the hypocritical name of the company), does not automate or even generate care.

It is just generating something that statistically approximates the appearance of care. Humans should know the difference.

My main use of ChatGPT is to… make fun of ChatGPT. Can you help me, Eliza?

Asking ELIZA for help with my ChatGPT obsession (text below)
ELIZA: Please tell me what’s been bothering you.                                
YOU:   I just want to make fun of ChatGPT                                       
ELIZA: Do you say you just want to make fun of chatgpt for some special reason ?
YOU:   Yes, because so many people seem to take it seriously.                   
ELIZA: You are sure.                                                            
YOU:   Yes, what do you think?                                                  
ELIZA: I see.                                                                   
YOU:   Do you?                                                                  
ELIZA: What are your feelings now ?                                             
YOU:   I want to go outside and roll in the snow.                               
ELIZA: What would getting to go outside and roll in the snow mean to you ?   
https://www.masswerk.at/eliza/

Next up on AImocking… who wants to guess?


Featured Image: Made by me combining in Photoshop a screenshot of Talking Eliza, another screenshot of ChatGPT, text generated at Back to the Future Font Meme, and one of the Stable Diffusion online generated images for “an older woman and a younger bearded man seated at a restaurant table on a date’ photorealistic.” All kinds of re-use grayzone here, but if I can assert a license it is CC BY.

Like text style from the movie it reads-- Back to the AI Past-- over two superimposed images of the Talking With Eliza web site and ChatGPT. In the middle is a Stable Diffusion image of an old couple on a blind date

Collecting TRU Collector SPLOT Stories

By: cogdog

I just can’t stop the SPLOT, which has been in my blog veins since late 2014.

Animation of words on a Stop sign cycling through "Cant Stop the SPLOT"

Speaking of late November, that was when I got a kind email from Daniel Villar Onrubia asking me to guest author an article on online infrastructures for open education for the EDUTEC Journal he is co-editing. As he is familiar with my quote-unquote work, Daniel suggested a few topics like Domain of One’s Own, the Networked Narratives course, and yes, SPLOTs.

These are things I collaborated with or talked about with Daniel when he was at Coventry University’s Disruptive Media Learning Lab. I have blabbed more than enough on SPLOTs and let the idea simmer for a while… and the pilot light went out.

But after seeing some more of the innovative ways Jessica Motherwell MacFarlane has been recently using TRU Collector at the OpenETC, the light flickered on as I got to thinking there was a bit of a story in looking at that one SPLOT, TRU Collector. Because it really started out rather simply in early 2015, as a way to collect sample found images from an open license workshop, and has much more going for it in 2023.

I believe there is something key about what was really a practical choice in the beginning to build SPLOTs as themes on WordPress: I did not have to design a full application, and the improvements and advances in the WordPress core (put aside your feelings about the Block Editor, okay, please?) mean my SPLOTs have been able to keep running.

More than that, though, it means that while it is not required to use a SPLOT, if a site creator has some understanding of the ways WordPress manages information (pages vs posts, the dual taxonomies, adding functionality with plugins, even changing the default chronological post front door)… well, you can bend this SPLOT tool even more to your whims.

I was not thinking as much about the technical stuff (though I have a hard time avoiding it), but how the ideas for the current version came not from me, but from folks like Daniel (who suggested adding the rich text editor for longer writing, and the ways it can be used as a directory of a class or of conference attendees) and his Coventry colleague Lauren Heywood (whose idea evolved into the way the Customizer can be used to tailor the submission form to fit the audience it is being used for). And even more recently, Jessica’s method of having students use a secret code as a tag for their work led to making it so certain views can be sorted differently than the default blog reverse chronology.

You see, all of these ideas that flowed into the current version would never have been there if it was left only to me to dream up.

But there was more. I went back to some stuff I read long ago about Eric von Hippel’s concept of a User Innovation Toolkit (was that idea from Jon Udell?), more from the world of manufacturing:

Toolkits for user innovation is an emerging alternative approach in which manufacturers actually abandon the attempt to understand user needs in detail in favor of transferring need-related aspects of product and service development to users.

von Hippel, Eric; Katz, Ralph (July 2002). “Shifting Innovation to Users via Toolkits”

And more nerves went firing to something I recall from Gardner Campbell, framing Marshall McLuhan’s medium/message idea with the notion that something “just a tool” like a hammer really becomes something different when it is in the hands of a person; it is a new tool, a “HammerHand”:

So no “just a tool,” since a HammerHand is something quite different from a hammer or a hand, or a hammer in a hand. It’s one of those small but powerful points that can make one see the designed built world, a world full of builders and designers (i.e., human beings), as something much less inert and “external” than it might otherwise appear. It can also make one feel slightly deranged, perhaps usefully so, when one proceeds through the quotidian details (so-called) of a life full of tasks and taskings.

http://www.gardnercampbell.net/blog1/doug-engelbart-transcontextualist/

So now I am more interested in the relationship of a tool/hammer like TRU Collector when it is in the hands of an educator. The best tools are not just a tool, and they are also not mega systems that browbeat our creativity into form box submission (hello LMS), and even worse when they are black-box AI-labeled things whose nature we cannot even perceive.

And now I need help.

If you have ever used the TRU Collector SPLOT for a class, a project, an organization, or even just to build something for yourself, I am trying to collect examples of them to (a) show the wide variety of creative uses it has, but also (b) pry open what this interplay of tool and educator hand means for our inventiveness. For maybe a post Prompt-in-the-Box era.

Would I go and email a bunch of questions to some folks? Make a form? Set up interviews?

And then I got one of those whacky ideas.

I came up with the idea of showing the affordances of this Simplest Possible Learning Online Tool by using TRU Collector itself as the collector of information… woah Neo! And thus, I built in maybe 2 hours… The TRU Collector Collector. And yes, there is a SPLOT hammer on the front cover.

I ask you to share any use(s) of TRU Collector. (Sorry, fans of TRU Writer, I love that SPLOT too, but I am focusing solely on Collector!) I set up the default detail entry with some questions as headers, but whatever you feel like writing is a help for me. And because TRU Collector asks for images, I suggested finding an image of a metaphor for what kind of tool TRU Collector is for you. Fun, eh?

Can you help? Please? Check out my hastily made site and click the share button.

Again, I thought I was backsliding to my old standby SPLOT love, but really some new ideas are bubbling. It is a deeper (maybe) look at what edtech tools ought to be, not what is foisted upon us. Thanks Anne-Marie Scott for making some suggestions and sharing this thought:

Edtech is not a tool. Pedagogy and technology are not mutually exclusive; they are entangled and mutually shaping. This is a fancy way of saying learning and teaching can be bent out of shape by technology and we can bend technology out of shape for learning and teaching purposes.

Anne-Marie Scott https://ammienoot.com/brain-fluff/procurement-aka-the-crack-in-everything-that-lets-the-bullshit-in/ 

Entangled up in SPLOT (is there a Dylan remix happening in the background?)

Again, please SPLOT with me! Got a hammer or a hammerhand?


Featured Image:

Alike / Not Alike / Together
Alike / Not Alike / Together flickr photo by cogdogblog shared under a Creative Commons (BY) license

Not Quite the Round Trip You Were AI-ing For

By: cogdog

Current hyped mythology includes the idea that from the outside of one of the MagicalAI machines you can somehow conjure up what produced it. Or can you loop back?

Promises, promises.

But the ride might be interesting.

As he does frequently, my friend and colleague Grant Potter slides me so many interesting sites and tools that I mostly end up re-sharing them through my Pinboard #cooltech (and posted as #cogdogcooltech to Twitter, while it lasts, and Mastodon).

Like just today:

Sure I could tag, bag, and boost, but I prefer running these things through the paces.

The premise of Img2Prompt is that from an image created / spawned / spit out by Stable Diffusion, it can go in and magically suggest/produce the prompt that could have created it. Then you can run that prompt and test: can you make a full round trip?
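If you wanted to script the lap instead of clicking through web pages, the loop itself is simple enough to sketch in Python. Both helper functions here are hypothetical stand-ins, not real APIs; you would wire them to whatever image generator and prompt predictor you have access to:

# NOTE: generate_image() and predict_prompt() are hypothetical placeholders,
# not real library calls; point them at your services of choice.

def generate_image(prompt):
    """Stand-in for a text-to-image service like Stable Diffusion."""
    raise NotImplementedError("wire this to an image generator")

def predict_prompt(image_bytes):
    """Stand-in for an image-to-prompt service like Img2Prompt."""
    raise NotImplementedError("wire this to a prompt predictor")

def round_trip(prompt):
    """One lap: prompt -> image -> predicted prompt -> second image.
    Comparing the two prompts (or the two images) shows how lossy the loop is."""
    first_image = generate_image(prompt)
    predicted = predict_prompt(first_image)
    second_image = generate_image(predicted)
    return predicted, second_image

# predicted, second = round_trip("an insect robot preparing a delicious meal")

Spoiler: as you will see below, the loop leaks badly.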

To give it a try, I went to Stable Diffusion Online (free, no login, can it last?) and started trying to make a suitable image. But instead I reached for one of the suggested prompts, “an insect robot preparing a delicious meal”, and thus I got these:

Stable Diffusion Playground with prompt entered "an insect robot preparing a delicious meal". 4 image results below of mechanical like insects, only 2 really look like they are preparing a meal
Yeah, robot insects cooking?

Sure, the images are better quality than those quaint craiyon ones, but I don’t find them really great. They have the same degree of… sameness. The ones on the left are not really preparing food, so I chose the top right one; it looks like a metal insect standing in a bowl of taco mixings.

When uploaded to Img2Prompt, what did I get?

a robot – sized bee, salad. it’s hard to believe the power of a machine – made from it. i think this machine is the real one.

prompt generated from an image, rather chatty, eh?

I cannot say I was impressed, but I wanted to try the full loop, and had it generate a new image from this prompt concocted from the first image.

Img2Prompt with uploaded Stable Diffusion image of a mechanical-like insect standing in a bowl of what looks like salad.

The predicted prompt "a robot - sized bee, salad. it's hard to believe the power of a machine - made from it. i think this machine is the real one." was then used to generate a different image, more of a  bee hovering behind a dandelion.

Well yes, I got a bee, not robotic, not salad, not real.

Some round trip.

Well, the site suggested trying an image from something not generated by Stable D. I have a few; I took this one to toss in the machine:

I Have An Alibi
I Have An Alibi flickr photo by cogdogblog shared into the public domain using Creative Commons Public Domain Dedication (CC0)

I tossed Felix and the busted toy in Image2Prompt, which yielded… stand back for the wows….

a large dog in the middle of a room, with green slime and some brown stains on the floor, hyper realistic 4k

some kind of prompt?
Img2Prompt tried with uploaded photo of a brown and white dog looking innocent standing next to a chewed-up green toy.

The predicted prompt was "a large dog in the middle of a room, with green slime and some brown stains on the floor, hyper realistic 4k"

It then generated a photo of a green colored dog in the middle of a wood floor room

I am not sure where green slime came from, but this image seems to have slimed the dog.

At least the wood floor is shiny and has some realistic light sources.

It’s interesting, but…


Featured Image: Historic Columbia River Highway – The Rowena Loop by Brett Hansen placed into the public domain as a US Government work.

The Historic Columbia River Highway loops around beneath the Rowena Crest Viewpoint. Byway travelers descend from the viewpoint to the loops below.

Chad GPT on Research-y Volcanology

By: cogdog

I already took a blog dip on the biggest craze in education since the ditto machine, but as you cannot take a scroll in social media and the news without tripping over 5 takes on ChatGPT… well, I could not help myself.

Well, I could, but I did not.

With all the efforts to take big swings at ChatGPT, I did appreciate seeing Maha Bali’s announcement of a workshop coming up this week (online/free), see her post and info within (sidenote: I have to commend her immensely on that metaphoric image, which led me down another whinging hole, as I have noticed that the Pixabay copy/paste attribution has no link to the original image) (that’s another post for another day).

https://blog.mahabali.me/educational-technology-2/equity-unbound-workshop-how-well-does-chatgpt-or-any-ai-tool-do-on-our-writing-assignments-how-do-we-adapt-openai/

What is refreshing is that Maha and co-conspirator Anna Mills have planned not another slide deck of info nor a bag full of links, but a real activity and discussion. The rabbit hole grew very large looking at Anna’s examples of essays generated by the gizmo, which are valuable for her commentary and her approach of fine-tuning the interrogation of the AI.

Another piece I liked was they opened a doc early and asked for people to suggest different assignment prompts that potentially could be used as exercises in the workshop.

ChatGPTing Blog Post Titles

I could not help myself but to play with one… er… two tries. The first is the already-dated approach of getting ChatGPT to talk about itself; the prompt was:

Write 10 different titles for a blog post about the conflict between human creative endeavors and algorithmic generated ones. Use metaphors based on the history of machines and inventions. A bit of sarcasm could help.

It looks like it has been docified to be part of the workshop. I have to say that the results I got were pretty, well, not so great, nor how I would have gone. The most curious lack of creativity is the repeated format of a blog post title seemingly requiring a colon.

“The Battle of the Titans: Man vs. Machine in the Arts”
“Creative Evolution: Can Algorithms Keep Up with Human Genius?”
“Robo-Artists vs. Human Masters: Who Will Prevail?”
“The Industrial Revolution of Creativity: How Algorithms are Changing the Game”
“The Luddite’s Guide to Algorithm-Generated Art”
“The Machine Age of Creativity: Will Algorithms Replace Human Artists?”
“The Automation of Aesthetics: How Algorithms are Shaping the Future of Art”
“The Algorithmic Uprising: Are Human Creators Becoming Obsolete?”
“The Digital Disruption: How Algorithms are Changing the Art World”
“The Mechanical Muse: How Algorithms are Inspiring and Challenging Human Creativity”

ChatGPT thinks these are good blog post titles. That dude will never write for me.

I did try and push it a bit, with a variation:

Now create a list of 10 blog post titles done listicle style.

Read these and you can tell ChatGPT sucks at understanding net culture

“10 Ways Algorithms are Changing the Art World”
“Top 8 Advantages of Algorithm-Generated Art”
“5 Ways Human Creativity and Algorithms Can Coexist”
“7 Examples of Algorithm-Generated Art That Will Blow Your Mind”
“The Pros and Cons of Algorithm-Generated Art: A Listicle”
“9 Artists Who Are Experimenting with Algorithms in Their Work”
“10 Predictions for the Future of Algorithm-Generated Art”
“6 Ways Algorithm-Generated Art is Challenging Traditional Art Forms”
“The Top 3 Controversies Surrounding Algorithm-Generated Art”
“A Beginner’s Guide to Understanding Algorithm-Generated Art: 10 Key Points”

ChatGPT’s attempts at listicle titles.

My other effort was maybe more out of scope, but I had a curiosity: I asked it a question that I knew should draw significantly from a source, a paper I co-authored in the 1990s when I was a PhD candidate in Geology.

I sort of already wrote this up in a Google doc as I was just taking notes, so am just copying here.

Can ChadGPT Write About Fluid Dynamics of Volcanic Flows?

Yes, I wrote “ChadGPT”. I think of ChatGPT as that know-it-all character Cliff Clavin from Cheers, the guy at the end of the bar who is always spouting off about everything in a way that sounds knowledgeable, but is not.

I even had DALL-E create an image so I can make Chad feel more alive to me.

“ChadGPT” image created by Alan Levine with OpenAI DALL-E 2 from prompt “A know it all man like Cliff Clavin sits at a wooden bar talking like an expert, photographic style” shared under OpenAI terms of use but anyone’s guess on a license (I would use CC0)

I decided to ask ChadGPT questions related to a published topic I know well (or did): a research paper I co-authored and published long ago as a grad student in Geology.

Alan H. Levine, Susan W. Kieffer; Hydraulics of the August 7, 1980, pyroclastic flow at Mount St. Helens, Washington. Geology 1991; 19 (11): 1121–1124. doi: https://doi.org/10.1130/0091-7613(1991)019<1121:HOTAPF>2.3.CO;2

I have a full copy of the paper’s contents on my own domain https://cog.dog/pubs/1991/11/15/msh/ 

Okay ChadGPT, show me your stuff!

DOT DOT DOT [insert thinking time and the magic of the response] [no actually I did this before so I refer the interested reader, all 1.5 of them, to my doc with the responses and my refinements].

First of all, if you are looking for the first response from Chat… ChadGPT to be just what you need, then you are not playing well. The game is in the refinement.

The first response sounds reasonable (see the response to round 1), but it is very generalized, and does not distinguish this particular eruption in 1980 (August) from the big blast in May. It does that circling-back thing where it ties everything up in a neat bow, because it can. And it included no momentum or flow equations, none at all.

For Round 2, I pushed it for more details:

Rewrite to include a more explicit explanation and representation of the momentum equations. Also, add reference citations.

This version had a few more details, and a few equations, but is not really relevant. The two references it gave are a book title that does exist and one that does not. It also did not cite any specific references in the text.

Now in Round 3, I apply more pressure, get more specific, Chad! And you suck at citing references…

You left out a few references for the August 7 1980 eruption, please add more in text citations.

As shown in the comments in the doc, it keeps the same fluid dynamics references (one is fake). The two it adds for the requested August 7 eruption references use the title of a relevant report (USGS Professional Paper 1250 is pretty famous, I have a copy), but that is a large multi-paper report that does include both of the names listed as authors, though their contributions in that volume are not related. So two more bogus references.

Now, finally, in round 4, I push to get it closer to the paper I co-authored:

Include as well a discussion of the feasibility or problem with applying physics of fluid dynamics to a gaseous medium of a pyroclastic flow using gas dynamics theory. Include a discussion of how such a volcanic flow may have or could not have experienced the equivalent of a hydraulic jump generated by channel constrictions or reduction of slope at the base of the volcano.

Chad waxes on, but it is really just stuff that sounds credible unless you have done this kind of research. What it is describing is not the eruption of August 7. And honestly, a Google search gets better reference results, but as we know, ChadGPT knows not of the world wide web, and in place of being relevant, it just tries to appear statistically relevant.
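For the record, what my prompt was fishing for is standard open-channel hydraulics (the textbook form here, not necessarily the gas-dynamics adaptation we worked through in the 1991 paper). A flow of depth h and velocity u has a Froude number, and the depths on either side of a hydraulic jump obey the classic conjugate-depth relation:

\[ \mathrm{Fr} = \frac{u}{\sqrt{g\,h}} \]

\[ \frac{h_2}{h_1} = \frac{1}{2}\left( \sqrt{1 + 8\,\mathrm{Fr}_1^2} - 1 \right) \]

A jump can form where supercritical flow (Fr > 1), say racing down the steep flank, decelerates at a channel constriction or a reduction in slope. Chad produced nothing even this specific in four rounds.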

Chad should hook up with Elicit.org and write papers together; at least that AI yields more relevant resources (and summarizes them).

So What

I’ve somewhat fallen into the trap of trying to go deep with one example foray. It’s again really hard for the 99.9% of us who are not LLM specialists to grok what is going on inside the box, so we just infer from our various pokes.

And I also lost myself in pointing out the invalidity of its responses. It has no experience, nor does it even have a stake, or interest, in what it spits out.

I have been thinking about a post on Mastodon by Kate Bowles (I’m somewhat hesitant these days to embed toots/posts, I wonder if that is appropriate??)

Following a trail from John Naughton’s article about the fediverse restoring human communication, I read his previous article on ChatGPT, and that took me to Murray Shanahan on human language, and LLM models as predictive sequences of words designed to seem likely.

“Sequences of words with a propositional form are not special to the model itself in the way they are to us. The model itself has no notion of truth or falsehood, properly speaking, because it lacks the means to exercise these concepts in anything like the way we do.”

https://aus.social/@kate/109731104243090193

I could link all those references (you can find them w/o an AI) (right?), but this leaps out at me: LLM models as predictive sequences of words designed to seem likely.

ChadGPT has nothing to do with truth, ideas, getting to an understanding, or sensemaking; it is all a fabrication to be statistically knowledgeable, to look like an informed source, not to be one.

Or, to go out on a farther limb, as a comment I think I left on Dave Cormier’s blog post, with a title that says my lines above with more metaphorical beauty (what ChadGPT fails at repeatedly): ChatGPT search – Autotune for knowledge. I pondered:

Here is a question: Would you prefer to do the hard work to love and be loved or to just get it easily to have something just looks like love?

Me commenting on Dave’s post

And the whole circus of this is the reactive froth we are in. By the time educators get their heads even tuned/autotuned into ChatGPT, the 4.0 version will be out, as will Google’s play, and all the other outfits scrambling to outdo OpenAI (if someone can do it better and cheaper…).

Who’s ready for Constitutional AI? (No, there is not a right to it!)

With Constitutional AI, we need only a few dozen principles and examples to train less harmful language assistants. With prior techniques, we needed tens of thousands of human feedback labels.

— Anthropic (@AnthropicAI) December 16, 2022

Or heck, see what Charlie The Intern does next!

A few notes:

– Charlie joined us last June as an intern
– He's now full-time, because of things like this
– He's the first person to apply this new technique to a real task
– He had a prototype up 4 days after the paper was published
– We had the feature in production in 2 weeks https://t.co/i0Btm3Arpb

— James Brady (@james_ought) January 11, 2023

It’s interesting times, for sure.


Featured Image: That’s my ChadGPT image used above, “ChadGPT” image created by Alan Levine with OpenAI DALL-E 2 from prompt “A know it all man like Cliff Clavin sits at a wooden bar talking like an expert, photographic style” shared under OpenAI terms of use but anyone’s guess on a license (I would use CC0), superimposed on my own photo, maybe Mt. Rainier? Clearing the Volcano flickr photo by cogdogblog shared under a Creative Commons (BY) license

Image of a know it all dude arms crossed at a bar atop a background image of a Cascade volcano
ChadGPT: Volcano Research Expert (or a statistical approximation thereof)