
CC Licenses and Cacti: It Gets Interesting (or weird) up Close

By: cogdog

I love Creative Commons, ok? I’ve followed and used the licenses since the start; I have the t-shirts. And like many, I can rattle off the stack of letters and explain them.

But in my internet roaming, especially for my other strong interest, photography, I come across things in practice that are much less cut and dried. Like a few posts ago when I fell into some slimy and weird toy spaces of public domain (not strictly CC, but in the same neighbourhood).

And I also love cacti. They are unworldly, especially to an east coast born suburban kid (well, I did taste the Southwest through Roadrunner cartoons), but so intriguing in design and beauty that defies the probability of life with almost no water.

Both seem simple from afar, yet different up close, and occasionally you might get stuck by a sharp spine.

Have I exhausted the un-necessary metaphor? I can’t resist; this was a photo I found by searching my flickr stream on “details”.

On with the show.

It Starts in My Feeds

Old man internet warning: this started while reading my RSS feeds in my folder of Photography sites. I clicked to read the PetaPixel article Generative AI is a Minefield for Copyright Law. Of course it opens with the requisite surreal AI generated image, but frankly it does not really give me anything new beyond what I’ve read before, especially from those great CC folks. Bottom line: no one can really say for sure where the clear rules and guidelines will land on generative imagery. It’s messy. Again.

But this is where it got me curious. Down at the bottom of the PetaPixel article, the credits read:

The opinions expressed in this article are solely those of the author. This article was originally published at The Conversation and is being republished under a Creative Commons license.

https://petapixel.com/2023/06/18/generative-ai-is-a-minefield-for-copyright-law/

It is “being republished under a Creative Commons license”. Which license is “a”? And where is the link to the license? I am an observer of attribution practice, and this one falls way short of the Creative Commons Best Practices. Okay, that’s just being sloppy, and I am no perfectionist.

But I am curious.

I follow the one link to the original article published at The Conversation (I have read many great articles there; good writing happens, I declare). What’s curious here is I can find no mention of a Creative Commons license on the article. There is a footer assertion “Copyright © 2010–2023, Academic Journalism Society” — so I dug around for more.

Not that it would ever be obvious to look for license details under a link for “Republishing Guidelines,” but there it is.

We believe in the free flow of information and so publish under a Creative Commons — Attribution/No derivatives license. This means you can republish our articles online or in print for free, provided you follow these guidelines:

https://theconversation.com/ca/republishing-guidelines

The belief in the free flow of information is a nice sentiment. And there it is: they are asserting a CC BY-ND license across their publications. One license to rule them all.

Except.

The conditions.

Now this was somewhat new to me, but I heard the smart and esteemed Jonathan Poritz (certified facilitator of the Creative Commons Certificate) say in an online license quibble that adding extra conditions to a CC license… nullifies it(?). That seems to be confirmed by the response on the CC Wiki to the question “What if I want to add some conditions and I clarify what I mean by a specific term? Is there anything wrong with adding conditions on top of a CC license?” though the details written under License Modification fall into the Ask a Lawyer region.

Back to the conditions on The Conversation’s site- the first three seem to be the scope of the CC BY-ND license: “You can’t edit our material” (that’s ND), “You have to credit authors and their institutions” (that’s attribution), “You have to credit The Conversation and include a link back to either our home page or the article URL” (also mostly standard attribution).

The question to me is the next one:

You must use our page view counter when republishing online. The page view counter is a 1 pixel by 1 pixel invisible image that allows us and our authors to know when and where content is republished. 

https://theconversation.com/ca/republishing-guidelines

Can they really make that a condition of reuse? To deploy a tracking pixel?

That smells a bit weird to me, along with there being no clear indication of the CC BY-ND license directly on articles (hence, perhaps, why PetaPixel does not know what license to declare??).
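For context, a page view counter like the one described is just an invisible image served from the publisher’s server: every republished page load fetches it, leaving a line in that server’s logs. A minimal sketch of the mechanics — the domain and parameter name here are made up for illustration, not The Conversation’s actual counter:

```python
# Sketch of a tracking-pixel "page view counter" (hypothetical URL and
# parameter name; this is not The Conversation's actual implementation).
import base64
import struct

# The classic 1x1 transparent GIF such a counter endpoint typically returns.
PIXEL_GIF = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
)

def counter_tag(article_id: str) -> str:
    # The HTML snippet a republisher would embed; each page view fetches the GIF.
    return (f'<img src="https://counter.example.com/px.gif?article={article_id}" '
            f'width="1" height="1" alt="" />')

# The GIF header confirms it really is a 1x1 image (width, height little-endian).
width, height = struct.unpack("<HH", PIXEL_GIF[6:10])
print(width, height)  # → 1 1
```

The point of the quibble: embedding that tag is a behavior requirement on the republisher, which is exactly the kind of extra condition a stock CC license does not include.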

Okay, this is truly quibbling, but thinking about these details is important, more than just a simple pat acceptance of the basic rules of licensing.

That’s a Weird Kind of CC0 at Rawpixel

For a recently published post I sought an image of a well known brand of candy. It’s not surprising, of course, that there are not many available. Funny that in my Google image search filtered for CC licensed results, a high ranking one was my own flickr photo of the Spanish language version I spotted in Mexico (and likely that might be a copyright infringement, shhhh).

The one I liked (and used) was pointed from Google to rawpixel. There’s a great image! But zoom in close, and there are some fishy things happening.

https://www.rawpixel.com/image/3799787/photo-image-vintage-retro-grass

I am very familiar with the iconic roadside Americana photos of John Margolies, readily available public domain content from the Library of Congress.

Rawpixel does declare the image source (not linked) and the CC0 license. All kosher. So far.

But try to download the image: you are required to create an account. Even if free, why do I have to sign up for an account to access public domain content? (Hint: the upsell answer is in the lower right corner.) So rawpixel is repackaging public domain content but putting a registration requirement on downloading it.

I can right/control click and download easily (I did), and that trick of hiding images in a .webp file format is no barrier (Preview on OSX now converts it easily to JPEG).
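If you would rather script that conversion than rely on Preview, here is a minimal sketch using the Pillow library (assuming a Pillow build with WebP support, which recent wheels include; the filenames are just examples):

```python
# Convert a downloaded .webp to JPEG with Pillow (pip install Pillow).
# Assumes your Pillow build includes WebP support (standard in recent wheels).
from PIL import Image

def webp_to_jpeg(src: str, dest: str) -> None:
    # JPEG has no alpha channel, so flatten to RGB before saving.
    Image.open(src).convert("RGB").save(dest, "JPEG")
```

Usage would be something like `webp_to_jpeg("hanks-sign.webp", "hanks-sign.jpg")` — the filenames there are hypothetical.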

But there’s more. What is that Editorial Use Only link, right below the link to the CC0 license?

Content labeled “Editorial Use Only” are for use in news and events-related articles, non-commercial blogs and websites, broadcasts and other non-profit media. This content cannot be used for commercial purposes including advertising, promotions and merchandising.

Editorial content should not be edited or altered substantially from the original image.

rawpixel.com

Now wait a minute: how can Rawpixel put extra conditions on CC0 content? I’d say this is about as enforceable as wet tissue.

Compare this to the source of this same image at the Library of Congress. No logins required, the images are directly there in usable JPEG format, and there are no extra conditions.

The question is- why does Google give preference in search results to fishy re-packagers of public domain content over the actual source?

We all know the an$wer.

Who Cares?

You should. When we just grab stuff because some web site says it’s free, we, especially as educators, should be looking at the fine detail. The same is true for the inevitable world changing tsunamic technofad (look closely at the top results; outside of Wikipedia, is there a pattern?).

Again, it’s something that at a quick glance has a statistically valid appearance of resembling useful information. If you grab and go, because it’s done for you easily, do you understand/question what you got? Can you zoom in and get an understanding of how it works, where it gets its info from? Can you even view source?

Nice pretty cactus there.


Featured Image: My photo! CC0, natch!

2014/365/263 More to a Cactus Than a Bunch of Needles
2014/365/263 More to a Cactus Than a Bunch of Needles flickr photo by cogdogblog shared into the public domain using Creative Commons Public Domain Dedication (CC0)

Strap on that Jetpack, Fly Over the Dead Bird in the Road

By: cogdog

I was trolling the olde blog settings for the WordPress Jetpack plugin and spotted something that confirmed what I had already learned: the nuking of Twitter’s API meant the Social module could no longer tweet new posts.

Old news.

But what I did see was a new option to connect my WordPress blog to publish new posts to Mastodon. In like 10 seconds I connected my account on my instance to this here blog.

Thus, this post is mostly a test of the connection (for which I spent more than too much time in Photoshop on the featured image).

This might end up double tooting, since I have been using an IFTTT applet to share to Mastodon whenever there is a new post in my RSS feed.

Ok, let’s see how this elephant flies with a new Jetpack!

Dead bird dead bird.

I’m no super billionaire businessman, but to cut off the world’s largest web publishing platform from adding content to your product is— dumb ass to the nth degree.

Toot, toot, someone sweep the dead bird off the road.

UPDATE (June 13, 2023)

I have two mastodon post tooters in operation now… for comparison:

On the left is the post made by my IFTTT applet; on the right is the one made by Jetpack Social.

The difference is minor: the Jetpack one includes an excerpt of the post. I like that in IFTTT I can customize it with extra text, like “Just CogDogBlogged” before the title, and an extra hashtag. I could include the excerpt in IFTTT but am not sure it matters. Also, IFTTT is a bit delayed, while the Jetpack one is instant. As if I am that important that I don’t want to keep my fan waiting.

I don’t think it means a hill of beans to anyone except me. I like IFTTT more, but I can use JetPack if the former ever poops out.
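What both connectors do under the hood is simple enough to sketch: format a status from a new post’s title and link (with the custom prefix and hashtag my IFTTT recipe adds), then POST it to Mastodon’s standard statuses endpoint. The instance name and token below are placeholders, not my actual setup, and this is a stdlib-only sketch, not what IFTTT or Jetpack literally run:

```python
# Sketch of an RSS-to-Mastodon "applet": build a status like my IFTTT recipe
# does, then post it via the standard Mastodon REST API (POST /api/v1/statuses).
# The instance name and access token are placeholders.
import urllib.parse
import urllib.request

def build_status(title: str, link: str,
                 prefix: str = "Just CogDogBlogged",
                 hashtag: str = "#cogdogblog") -> str:
    # The IFTTT-style customization: extra text before the title, extra hashtag.
    return f"{prefix}: {title} {link} {hashtag}"

def toot(instance: str, token: str, status: str):
    # One authenticated POST creates the status ("toot").
    req = urllib.request.Request(
        f"https://{instance}/api/v1/statuses",
        data=urllib.parse.urlencode({"status": status}).encode(),
        headers={"Authorization": f"Bearer {token}"},
    )
    return urllib.request.urlopen(req)

print(build_status("Strap on that Jetpack", "https://cogdogblog.com/?p=12345"))
# → Just CogDogBlogged: Strap on that Jetpack https://cogdogblog.com/?p=12345 #cogdogblog
```

The delay difference makes sense from this shape: a webhook-style connector fires the POST the moment a post publishes, while a feed poller only notices on its next pass through the RSS.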


Featured Image: My own photoshoppery (OMG the crap DALL-E gave me for “An elephant wearing a jetpack hovers in the air over a dead bird laying on the road, cartoon style”) based on p1210759 flickr photo by generalising shared under a Creative Commons (BY-SA) license and Dead bird flickr photo by indoloony shared under a Creative Commons (BY-NC-ND) license

Humanery and/or Machinery

By: cogdog

Despite the apparent demise of blogs, the flat line of the RSS-ograph blips with a pulse from David Kernohan “on chatbots.” FOTA is alive!

Unsure if my comment gets through the gate (a first one generated a critical WordPress error, sorry, David), but I have to at least assert my assertion, as if it blips anywhere in the raging discordant discourse: “Intelligence might be based on pattern recognition as Stephen [Downes] asserts, but pattern recognition should not be mistaken for intelligence.”

So when David passes a linked reference to the Colossus as the dawn of pattern guessing to decrypt war time messages, my pattern recognition goes to where no GPT can fabricate:

2014/365/125 Just Part of Colossus
2014/365/125 Just Part of Colossus flickr photo by cogdogblog shared under a Creative Commons (BY) license

This photo was taken on my own visit to the National Museum of Computing in Bletchley Park, that being a memorable day when Dave and his partner Viv drove me all the way from Bristol where I visited them to Milton Keynes where I spent a week at the Open University.

Maybe a machine could mine the facts from my blog posts and photos, but it would never make the connections, the feelings, to the experience of being there that are not digitized or accessible to wholesale scraping. Never.

Or is this my own flailing effort to raise a pitifully tiny flag of I Am Human in front of the advancing, inevitable horde of machines? For an image I could have plopped a prompt into DALL-E/MidJourney/Stable Diffusion, but why, when I can deploy one of my own making?

2023/365/63 Infinite Clones
2023/365/63 Infinite Clones flickr photo by cogdogblog shared into the public domain using Creative Commons Public Domain Dedication (CC0)

I could try my best to weave more words around my emerging thought patterns, yes, ones that I generate from my own sum of vast experiences. And truly, I could say that I myself, with this nerve network plugged into a 3 pound skull-enclosed non-battery-powered device, merely have been training 50+ years on written, visual, and auditory media, much of which I did not ask explicitly to use, from which I generate, through some mystical process, my “own” words, my “own” imagery?

Everything is a Remix but AI Does Not Remix Like an Artist

Who better to turn to than Kirby Ferguson to wisely delve into Artificial Creativity?

Stop, watch the whole thing. I mean the whole damn series. I can only yank quotes.

Of all humanity’s technological advances, artificial intelligence is the most morally ambiguous from inception. It has the potential to create either a utopia or a dystopia. Which reality will we get? Just like everybody else I do not know what’s coming, but it seems likely that in coming decades these visions of our imminent demise will seem campy and naive, because our imaginings of the future always become campy and naive.

Everything is a Remix Part 4

He takes AI to “court” on three counts, and makes a point that many don’t want to accept: that harvesting all of the “stuff” readily available is maybe not the ethical point on which to hang the purveyors. If you buy into his theme that everything is a remix, that means everything is available, as he has done in his video.

But do not take this as suggesting there is a free ticket to just grab content for the classic “because you can” reason. Follow Kirby Ferguson’s statement about all the media he has remixed into his video:

On some videos about AI the big reveal is that this video was actually made by AI. But this video and this series is the opposite. Nothing has been AI except where I cited AI art. This is entirely human made, the words are all mine but they’re merged from the thoughts of countless people. Everything you’ve seen and heard is from real filmmakers and musicians and game developers and other artists. All these thoughts and all this media were remixed by me into something new and yes I did it all without permission.

Everything is a Remix Part 4

The big difference is that this filmmaker provides credits / attribution to his sources. It is very clear what was used. There is no masking of source content, or how it was used, hidden behind the facade of a commercial purveyor whose very name has washed “open” with techno-Clorox.

Also, lost in the court section is a very valid question:

Training AIs on individual artists’ work does seem wrong. Everyone should be able to opt out of all training sets, and maybe AIs should simply not train on images from active art communities. Also, some company should make an image generator trained on public domain and licensed images, which would avoid this hornet’s nest entirely. Somebody please do this.

Everything is a Remix Part 4

Why is there no ethical entity out there creating training sets from public domain or openly licensed materials? Or why doesn’t quote/unquote “OPEN” ai DOT com, which already trains its machines on Wikipedia amongst everything else, just create a version limited to truly open content? About the only thing I found was an image generator on Hugging Face that looks like it does this, but I am not clever enough to make it do anything.

There is a free idea for anyone to pick up.

Finally, Kirby Ferguson ends with a compelling (to me) assertion of the essence of creativity.

AIs will not be dominating creativity because AIs do not innovate. They synthesize what we already know. AI is derivative by design and inventive by chance. Computers can now create but they are not creative. To be creative you need to have some awareness, some understanding of what you’ve done. AIs know nothing whatsoever about the images and words they generate.

Most crucially, AIs have no comprehension of the essence of art, living, AIs don’t know what it’s like to be a child, to grow up, to fall in love, to fall in lust, to be angry, to fight, to forgive, to be a parent, to age, to lose your parents, to get sick, to face death. This is what human expression is about. Art and creativity are bound to living, to feeling. Art is the voice of a person and whenever AI art is anything more than aesthetically pleasing it’s not because of what the AI did it’s because of what a person did.

Art is by humans for humans.


Everything is a Remix is a testament to the brilliance and beauty of human creativity. In particular it’s a testament to collective creativity. Human genius is not individual it is shared.

Everything is a Remix Part 4 (emphasis added by me)

Please watch this video! All of them!

Back To The Hammer Hand

Another Old Thing
Another Old Thing flickr photo by cogdogblog shared under a Creative Commons (BY) license

But it’s not as clean as just going John Henry and making an untenable slice of human versus machine. Artificial Intelligence “stuff” is a tool, but it’s not “just a tool.” I am reaching back to something I often rely on from Gardner Campbell’s explanation of Marshall McLuhan:

“There is no such thing as “just a tool.” McLuhan wisely notes that tools are not inert things to be used by human beings, but extensions of human capabilities that redefine both the tool and the user. A “tooler” results, or perhaps a “tuser” (pronounced “TOO-zer”). I believe those two words are neologisms but I’ll leave the googling as an exercise for the tuser.

The way I used to explain this is my new media classes was to ask students to imagine a hammer lying on the ground and a person standing above the hammer. The person picks up the hammer. What results? The usual answers are something like “a person with a hammer in his or her hand.” I don’t hold much with the elicit-a-wrong-answer-then-spring-the-right-one-on-them school of “Socratic” instruction, but in this case it was irresistible and I tried to make a game of it so folks would feel excited, not tricked. “No!” I would cry. “The result is a HammerHand!”….

http://www.gardnercampbell.net/blog1/doug-engelbart-transcontextualist/

So no “just a tool,” since a HammerHand is something quite different from a hammer or a hand, or a hammer in a hand. Gardner has given me more directly, in email:

I got to that in part because of McLuhan’s famous dictum “the medium is the message.” Most folks appear to think he meant that the medium shapes the message. If you read the piece in which the phrase appears, however, you can see that’s not what he meant. Instead, McLuhan thought of every medium as a message about what we are and desire as human beings. He said the electric light was a message. Every medium should tell us something meta about itself, and something vital about humanity. A medium is not just a channel for transmitting stuff. A medium is also itself a message, a transmission. Can we understand the medium’s message about itself, and thus about us? That’s why the book is called Understanding Media. What is the message these media convey about themselves? and about mediated experience generally?

So with that, I built on Alan Kay (and I think others as well), who said “we shape our tools, and after that our tools shape us,” bringing in the idea of man-computer symbiosis, putting it all within the context of Engelbart’s integrated domain, and then re-reading McLuhan to find a way to express what I took to be something essential about his ideas of human transformation in the development of mediated experience, and I came out with hammerhand.

Gardner Campbell, personal communication

Much of the educator reaction to ChatGPT (which to me is narrow, as there is much more we should be wrapping our heads around) is so focused on the fear/worry/change factors rather than “ideas of human transformation in the development of mediated experience.”

So This Thing Happened Today Which I Defy Anyone To Experience By Typing Prompts Into a Box

Going back to where I started, with David Kernohan’s On Chatbots post, he gives just a short bit at the end to maybe the larger of his two ideas, under the heading “A Matter of Semantics”:

I want to close my argument by thinking about the other major strand of artificial intelligence – an associative model that starts (in the modern era) with Vannevar Bush and ends with, well, Google search. The idea of a self-generating set of semantic links – enabling a machine to understand how concepts interrelate – is probably closer to the popular idea of artificial intelligence than toys like ChatGPT.

http://followersoftheapocalyp.se/on-chatbots/

meaning (I think) that the interconnected web of ideas imagined by Bush, which influenced Engelbart and actually was instantiated by Tim Berners-Lee, is the connectivist idea that the web itself, changing with every new bit linked on to it, offers more potential for making something akin to intelligence than chatbots that are merely regurgitating parts of it in a way that just parodies intelligence, not embodies it.

So this happened today. It is of no significance to any discussion threading out in the ghosted public square of twitter or the de-aggregated but vibrant corners of Mastodon, certainly not the dead-to-me places where I will never venture, the spew pots of (f*** it, I cannot call it “Meta”, it’s always Facebook), or the iteration of “America Online is to the real Internet as LinkedIn is to ______________”… Oh, I might have lost my thought completely, as it humanly happens.

Can I get help?

I tried by asking ChatGPT, "Help Alan get back on target as he goes off track trying to write a new post for CogDogBlog".

There is a bunch of drivel like "Define the purpose of the post: Alan should begin by asking himself why he wants to write this post. Is it to share information, spark a discussion, or provide a tutorial? Defining the purpose will help him stay focused throughout the writing process.

Create an outline: Once he has identified the purpose of the post, he can create an outline. This will help him organize his thoughts and ideas in a logical manner. He should include a clear introduction, main points, and a conclusion."

This makes this blog sound like some marketing cheese.

This is what people have their undergarments wadded about? Seriously? I push back.

I had to ask, "Have you actually read CogDogBlog? It does sound like it." to which the parrot barfs, "As an AI language model, I don't read blogs or any other material in the way that humans do. However, I am designed to understand and generate human-like responses based on the vast amount of information that I have been trained on. So, I am familiar with the general concept of CogDogBlog, which is an educational technology blog maintained by Alan Levine, also known as "CogDog." The blog covers a range of topics related to digital storytelling, open education, and creative use of technology in education."
One cannot get more Gurgitating Parroted Trash than this. If you are worried about AI generated text, then maybe look some in the mirror at your own human generated text.

Okay, I am left to my own story making.

Today I sat down to catch up on a few DS106 Daily Creates, it being the very essence of acts of human creativity assisted by tools (using a “TDC HAND”). This was one challenge from a few days ago which in true TDCness, gives a nudge, and opens a door to respond in almost any way.

#tdc4081 #ds106 When randomly generated people from different countries meet each other

You have the power to fabricate people from a mere click! Show us a photo how two of them might meet each other.

First, select two names (from different countries) created by the Names Encyclopedia Random Generator. Then find a photo that might represent them, and add a caption to suggest the context of their meeting.

Lots of ways to run with this, so I just started with the random names generator that suggests possible names from different countries. Cool! I love random stuff and never saw this one. There are 12 countries listed, each with 10 names. I just decided to be quick and use the first names in the middle row:

4 random made up names from Romania, Hungary, Spain, and Sweden.

Before getting to where / how they meet, I decided I needed pictures. Before everyone got wrapped up in generative text posing as intelligence, there was the phase of all the generative adversarial network (GAN) tools making realistic photos of people and things that do not exist. If you want a real diversion, see This X Does Not Exist. But I went for the one I remember for generating people, thispersondoesnotexist.com, but that now seems gone and only goes to some AI outfit.

But I did find a similarly-URL-ed version at https://this-person-does-not-exist.com/ that was interesting, as there are a few more options to choose from (gender, age range, a few ethnicity options), so I generated 4 non-existent people for Ionut, Lázár, Angel, and Elenor. I imported them into Photoshop using one of the Panorama collages, which spread them out like photos on a table.

Then I tried to think of where to place these non-existent people. I first reached for a new browser window thinking of some sort of technical image, like a computer circuit board. This is when unexpected-ness happened.

You see, I use the Library of Congress Free to Use browser extension that puts a random public domain image in my screen each time I open a new browser tab. I was fully intending to open an image search, but there, by random chance, was my answer: a road sign for Hanks Coffee Shop. Even better, because it was from Benson, Arizona, a place I have been before.

Hanks Coffee Shop sign, 4th Street, Benson, Arizona (LOC)
Hanks Coffee Shop sign, 4th Street, Benson, Arizona (LOC) flickr photo by The Library of Congress shared with no copyright restriction (Flickr Commons)

So now it all came together, these people who do not exist, met up for coffee at Hanks in Benson. A bit more Photoshop editing to make a cloud background, superimpose the names of the four, and I was done.

@ds106dc #tdc4081 #ds106 Ionut, Lázár, Angel, and Elenor materialized at https://t.co/DKTTg3iMq5

Through magic randomness of LOC Random image in a new browser tab, met at Hanks Coffee Shop, in Benson, AZ, which *does* exist, see https://t.co/MDHKBoI7h3 pic.twitter.com/WLDN6ucC5M

— Alan Levine (@cogdog) March 19, 2023

“So what?” is certainly a reasonable response. Couldn’t I save time and just type into an image prompt box, “Photos of 4 people displayed under an old time coffee shop sign”? And maybe iterate a few times until it’s “good enough”? Yes, but is making art about the process or the product? Maybe sometimes it is just getting the thing done, turn it in, as they say.

But what is the connection to it? Would an AI remember driving through Benson, AZ on a memorable road trip to camp in the Chiricahua mountains? Would it remember a completely un-related connection from these photos in the Flickr Commons and that there was a call a while ago for examples of galleries of themed images from the commons? And would it then decide, for no productive reason, to search for other Arizona road sign images in the flickr commons, create a gallery, and then share it back?

I’d say, plausibly, eff no. I want to be doing stuff described as “Art is the voice of a person and whenever AI art is anything more than aesthetically pleasing it’s not because of what the AI did it’s because of what a person did.”

I’m not saying at all don’t do AI. And I am absolutely intrigued by what it might offer; we have hardly even scratched the surface. But it does not always mean we have to just line up as robot servants to the AI Industrial Complex.

If we lose our ability, interest, to be non stochastically human in our tasks, then we end up being “derivative by design and inventive by chance.”

Fin

Never. But I am hoping, before not too long, to see another just thinking blip from FOTA.


Featured image: Humanly created and selected, mine

Beware the Machines!
Beware the Machines! flickr photo by cogdogblog shared under a Creative Commons (BY) license

Back to the AI Past: ChatGPT and ELIZA on a Blind Date

By: cogdog

Perhaps there needs to be more romance in the world of Artificial Intelligence? For a while, I had some thoughts back to good old ELIZA. What does she think of all this stuff? Most likely she would not say but deflect back with more questions.

I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

Joseph Weizenbaum in Computer Power and Human Reason (1976)

What he describes has been called the ELIZA Effect, “the tendency to unconsciously assume computer behaviors are analogous to human behaviors.”
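ELIZA’s whole trick is keyword pattern matching plus “reflection” of pronouns in the matched fragment, which is why she can always deflect back with a question. Here is a toy sketch of the idea in Python — a few invented rules for illustration, not Weizenbaum’s original DOCTOR script:

```python
# A minimal ELIZA-style responder: regex keyword matching plus pronoun
# "reflection". A toy sketch of the mechanism, not the original DOCTOR script.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

RULES = [
    (r"i want (.*)", "What would {0} mean to you ?"),
    (r"i am (.*)", "How long have you been {0} ?"),
    (r".*", "Please tell me what's been bothering you."),  # catch-all deflection
]

def reflect(fragment: str) -> str:
    # Swap first/second person so the fragment reads back naturally.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        m = re.match(pattern, text.lower().strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I want to go outside and roll in the snow"))
# → What would to go outside and roll in the snow mean to you ?
```

That a page of pattern rules like this induced “powerful delusional thinking in quite normal people” is exactly Weizenbaum’s point.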

ELIZA, Meet ChatGPT

I solicited Doc Brown to drive the DeLorean back to 1964 but he was unavailable…

You can find many web-based implementations; my favorite is E.L.I.Z.A. Talking, where you can simulate her on a machine of her 1964–1966 era, and even speak to her.

Talk to and hear back from ELIZA https://www.masswerk.at/eliza/

So I had to play intermediary, setting them up on a blind date and shuffling questions and responses between the two of them.

ChatGPT screen with my opening prompt “You have shown up for a blind date with someone you only know as ELIZA. When you meet her, what will you say as an introduction?”

To set the scene, I asked Stable Diffusion Online to create some visuals (I must be terrible at this because none of them were what I had in mind; maybe the first one, top left, is close?) using the prompt “an older woman and a younger bearded man seated at a restaurant table on a date, photorealistic.”

Pick your own scene for this blind date:

Four images generated by Stable Diffusion, each as awkward as this date must be.

I had to intervene a few times, but let’s listen in to this blind date:

To create this I just shuttled the responses back and forth between the two interfaces (ChatGPT sometimes needed me to nudge him; he was nervous?).

I turned to another AI service, Murf.ai, to render the audio. I am not quite sure where the AI comes into this versus the standard text-to-audio capabilities I’ve used before (the voice from https://www.masswerk.at/eliza/ is much better).

But shrug: if you put AI in the name, who knows really what it does?

You should be able to “Murf” it yourself via https://murf.ai/share/lebr6p6e

The Murf.ai interface for rendering the conversation as audio. The voices are Ethan saying my lines, Claire saying ELIZA’s lines, and Ryan doing ChatGPT.

I was not going to pay for the download fee, so I did the lazy route of playing audio from the Murf site and recording in Audacity on a second laptop.

Back to the Present

I did this exchange a week or two ago but it lingered as files. I was inspired a bit last night after Cori and I went to the classic Mae Wilson Theater in Moose Jaw to travel back in movie time to 1985.

Back to 1985 on the Big Screen
Back to 1985 on the Big Screen flickr photo by cogdogblog shared into the public domain using Creative Commons Public Domain Dedication (CC0)

So this blind date really was some time travel!

It’s also about as much as I feel like doing amongst the tidal wave of fervor over AI… like ten years ago when I turned to mocking a hyped edtech trend, here I start a new trend of mocking.

Again, turning the time machine back to ELIZA’s creator, from the Wikipedia article on the ELIZA Effect:

Joseph Weizenbaum considered that not every part of the human thought could be reduced to logical formalisms and that “there are some acts of thought that ought to be attempted only by humans”.  He also observed that we develop emotional involvement with machines if we interact with them as humans. When chatbots are anthropomorphized, they tend to portray gendered features as a way through which we establish relationships with the technology. “Gender stereotypes are instrumentalised to manage our relationship with chatbots” when human behavior is programmed into machines.

https://en.wikipedia.org/wiki/ELIZA_effect

How can we not interact with machines as humans when they are seemingly talking to us? Much of what people are saying leaps ahead of the reality: what they are seeing is an approximation of what looks like conversation, produced by [quickly] autocompleting ahead. Of course the references ChatGPT names are not real, because it is not doing research; it is just generating something that statistically approximates the appearance of a relevant reference.
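To make "statistically autocompleting" concrete, here is a toy sketch of my own (not from the post, and nothing like a real transformer): a word-level chain that picks each next word based only on what tends to follow it in a tiny corpus. The corpus and function names are made up for illustration; the point is that fluent-looking output needs no understanding or research behind it.

```python
# Toy "autocomplete": choose the next word purely from observed
# follow-on statistics. Fluency without comprehension, in miniature.
import random
from collections import defaultdict

corpus = (
    "the robot told me the robot wrote the essay and "
    "the essay cited the reference the robot invented"
).split()

# Count which word follows which in the corpus
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def autocomplete(word, length=8, seed=42):
    """Extend `word` by repeatedly sampling a statistically likely follower."""
    random.seed(seed)
    out = [word]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:  # dead end: no word ever followed this one
            break
        out.append(random.choice(nxt))
    return " ".join(out)

print(autocomplete("the"))
```

Every word pair it emits really was seen in the corpus, so the output reads plausibly, yet the program "knows" nothing about essays, robots, or references. That, scaled up by billions of parameters, is the gap between appearing to converse and actually doing research.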

This ties in to Maha Bali’s criticism of any communication that starts with “I asked ChatGPT to…” as really an abdication of care, not even an automation of it. Because again, this black box, for which we have no intuition or understanding of how it produces words (cue the hypocritical name of the company), does not automate or even generate care.

It is just generating something that statistically approximates the appearance of care. Humans should know the difference.

My main use of ChatGPT is to… make fun of ChatGPT. Can you help me, Eliza?

Asking ELIZA for help with my ChatGPT obsession (text below)
ELIZA: Please tell me what’s been bothering you.                                
YOU:   I just want to make fun of ChatGPT                                       
ELIZA: Do you say you just want to make fun of chatgpt for some special reason ?
YOU:   Yes, because so many people seem to take it seriously.                   
ELIZA: You are sure.                                                            
YOU:   Yes, what do you think?                                                  
ELIZA: I see.                                                                   
YOU:   Do you?                                                                  
ELIZA: What are your feelings now ?                                             
YOU:   I want to go outside and roll in the snow.                               
ELIZA: What would getting to go outside and roll in the snow mean to you ?   
https://www.masswerk.at/eliza/

Next up on AImocking… who wants to guess?


Featured Image: Made by me combining in Photoshop a screenshot of Talking Eliza, another screenshot of ChatGPT, text generated at Back to the Future Font Meme, and one of the Stable Diffusion online generated images for “an older woman and a younger bearded man seated at a restaurant table on a date” photorealistic. All kinds of re-use grayzone here, but if I can assert a license it is CC BY.

Like text style from the movie it reads-- Back to the AI Past-- over two superimposed images of the Talking With Eliza web site and ChatGPT. In the middle is a Stable Diffusion image of an old couple on a blind date

It’s as if MarketingGPT is already in play… pooping on the web

By: cogdog

In Chapter 32 of the “You Cannot Make this**** Up Book of Internet Stupidity” I submit for your disapproval perhaps evidence that GPT AI is in use already for marketing pimple brains.

It was more than 10 years ago I began some relentless debunking of the oft-cited 60,000 times faster assertion — “Researchers at 3M have shown that humans process images 60,000 times faster than text.” Even when cited, the source turns out to be a PDF of a 3M promotional brochure. I spoke with 3M employees and other visualization experts doing similar research at the time, and there is bupkis for evidence. With the help of an ace research librarian, I traced it back to a computer magazine opinion column from 1982, where the trail went cold.

If you ever see this claim, question it severely. I am 100% sure it is not supported by any evidence.

I do get sporadic comments and trackback pings from people who have questioned it, turned to the Google, and found my stuff (insert pat on the back). I feel I have contributed a tiny drop of sanity toward demystifying internet BS.

Until today.

I got a trackback ping from a post by some consultants I will not give any more google juice by naming, but peek for yourself at Video is Still King!

The human brain processes visuals 60,000 times faster than text (source: 3M Corporation). (VISUAL) Video not only helps you remember, but it helps you connect. People connect to the company, connect to your brand, and tend to trust your brand more when they see your video.

Some Consultants from Mars

Look, they cite 3M Corporation as a source of evidence for the claim… with a link.

The link supporting their claim goes to my blog!

They have asserted a false claim by backing it up with a source that is one of my blog posts debunking it!

I could not make this up. I would certainly line up to hire these consultants; you should too! I just hope my blog does not alert them with a ping, it would be a shame to lose this specimen of poop on the web.

And I can only conjecture that ChatGPT is going to be the marketer’s dream for spitting out crap copy, not to mention authoring more realistic sounding phishing email attempts.

Thank you internet for putting the turd cherry atop another one of those days.


Featured Image: One of my own, a sort of internet reality distortion modulator…

Incredible Lobby
Incredible Lobby flickr photo by cogdogblog shared into the public domain using Creative Commons Public Domain Dedication (CC0)

LinkedUn

By: cogdog

I am going to regret this, but 8 years after a tortuous mess of an extraction from (yecch, I just hate typing it in), well, that “professional” social network that rhymes with “StinkedInn,” I have been foolish enough to step back in there.

But there was a work thing that really demanded I see some posts, and because this network only pretends to be of the web, I got weary of all the end-arounds it took to see more or less something that exists on the web, behind the blue velvet ropes.

I got the info I needed, and already feel a wave of regret.

Yes, I know the conventional wisdom is that it’s important to be in there with all the other adults.

I decided I would make my profile the dullest lamest one ever and that it would just sit there in an obscure corner.

An as-plain-as-it-can-be LinkedIn profile page; the icon is a red slash sign through icons of a bike and a dog, my name, location, and a listing that I work for Open Education Global. The About section reads "I most reluctantly created an account here after deleting one years ago and am going to experiment with creating the most boring and unlinkable profile ever. 

If you want the real thing, see my calling card https://cog.dog or listen to the barking at https://cogdogblog.com/"
I am vying for the lamest LinkedIn profile ever.

Save your time and do not look for me, ok?

I’ve not been following, not returning links, nor clicking on the 3 kajillion emails it sends me, the best ones with subject lines like “Alan, you’re really getting noticed”.

If you have clicked to follow and feel dissed that I have not reciprocated, it’s because I want to be a small thorn inside the algorithm, an outlier that clunks like a rock in the machine works.

Yeah, right, as if it makes a difference.

I am not playing the Linked** game, I will not eat click blue eggs and ham, Sam I am.

UPDATE: Jan 25, 2023

I turned off my public visibility, but people keep finding me.

Stop looking at my profile! And see how stupid an algorithm can be.


Featured Image: 2014/365/57 Ignore All Warnings flickr photo by cogdogblog shared under a Creative Commons (BY) license the sign reading “road closed ahead” modified to read “Dullness Ahead” and in the background is a slightly transparent screen shot of my lamed Linked** profile.
