
The Fugitive Heiress Next Door

In a decrepit house in São Paulo lives a woman who many people call a bruxa (the witch). As a blockbuster Brazilian podcast recently revealed, Margarida Maria Vicente de Azevedo Bonetti is wanted by U.S. authorities for her treatment of a maid named Hilda Rosa dos Santos, whom Margarida and her husband more or less enslaved in the Washington, D.C. area:

In early 1998—19 years after moving to the United States—dos Santos left the Bonettis, aided by a neighbor she’d befriended, Vicki Schneider. Schneider and others helped arrange for dos Santos to stay in a secret location, according to testimony Schneider later gave in court. (Schneider declined to be interviewed for this story.) The FBI and the Montgomery County adult services agency began a months-long investigation.

When social worker Annette Kerr arrived at the Bonetti home in April 1998—shortly after dos Santos had moved—she was stunned. She’d handled tough cases before, but this was different. Dos Santos lived in a chilly basement with a large hole in the floor covered by plywood. There was no toilet, Kerr, now retired, said in a recent interview, pausing often to regain her composure, tears welling in her eyes. (Renê Bonetti later acknowledged in court testimony that dos Santos lived in the basement, as well as confirmed that it had no toilet or shower and had a hole in the floor covered with plywood. He told jurors that dos Santos could have used an upstairs shower but chose not to do so.)

Dos Santos bathed using a metal tub that she would fill with water she hauled downstairs in a bucket from an upper floor, Kerr said, flipping through personal notes that she has kept all these years. Dos Santos slept on a cot with a thin mattress she supplemented with a discarded mat she’d scavenged in the woods. An upstairs refrigerator was locked so she could not open it.

“I couldn’t believe that would take place in the United States,” Kerr said.

During Kerr’s investigation, dos Santos recounted regular beatings she’d received from Margarida Bonetti, including being punched and slapped and having clumps of her hair pulled out and fingernails dug into her skin. She talked about hot soup being thrown in her face. Kerr learned that dos Santos had suffered a cut on her leg while cleaning up broken glass that was left untreated so long it festered and emitted a putrid smell.

She’d also lived for years with a tumor so large that doctors would later describe it variously as the size of a cantaloupe or a basketball. It turned out to be noncancerous.

She’d had “no voice” her whole life, Kerr concluded, “no rights.” Traumatized by her circumstances, dos Santos was “extremely passive” and “fearful,” Kerr said. Kerr had no doubt she was telling the truth. She was too timid to lie. 

The future of scholarly podcasting can still be whatever we want it to be

By: Taster
From esoteric passion projects to mainstream talk shows, academic podcasting, like the medium as a whole, has grown considerably over the past decade. Drawing on interviews with all kinds of academic podcasters as part of his new book, Ian M. Cook argues the future of the academic podcast is still undecided and that it continues …

One More Descript Thing

By: cogdog

People still read blogs. Well, maybe a few of them. I was happy to see others get intrigued and interested in my sharing of the ways Descript had really revolutionized my way of creating podcast audio.

Beyond likes and reposts, there’s not much more positive an effect than capturing Jon Udell’s interest, as it happened on Mastodon and as he shared (aka blogged) about an IT Conversations episode he re-published.

And as often happens, Jon’s example showed me a part of the software I was unaware of. This was, as I remember, one of the most evident aspects I found in the 1990s when I started using this software called Photoshop: each little bit I learned made me realize how much of its total potential I did not know, like it was infinite software.

You see, I made use of Descript to much more efficiently edit my OEG Voices podcasts, but my flow was exporting audio and posting to my WordPress-powered site. Jon’s post pointed to an interesting aspect of audio published to a Descript.com sharable link.

Start with my most recent episode, published to our site, with audio embedded and a link to the transcript Descript creates.

If you access the episode via the shared link to Descript, when you click the play button in the lower left, the transcript highlights each word in a kind of read-along fashion. That’s nifty, because you might want to stop to perhaps copy a sentence, or look something up.

Descript audio playback where the transcript shows the text of the audio being played back.

Even more interestingly, you can highlight a portion of text, use a contextual menu, and provide a direct link to that portion of audio. Woah. Try this link to hear/read Sarah’s intro from the screenshot above.

Yes, Descript provides addressable links to portions of audio (note: I have found that Descript does not jump down to the location; maybe that’s my setup. I did post a bug report request in their Discord).

But wait, there’s more. You can also add comments (perhaps annotation style) to portions of the transcript/audio.

You do have to create an account to comment, so you might not appreciate that. It looks like it’s more aimed at comments for production notes, but why can’t it be more annotation-like?

Anyhow, this was nifty to discover, and I would not have known this had Jon not shared his own efforts with a link.

This is how the web works, or at least my web works this way. And it was refreshing to explore some technology without the din of AI doomsday or salvation-day reverb (Descript does use AI for transcription, but at a functional level, not a shove-it-in-your-face level).

I am confident, as always, that there is more about Descript I do not know than what I do know (I need to learn the Overdub tool).


Featured Image: There’s always that one thing…

Curly’s Law flickr photo by cogdogblog shared under a Creative Commons (BY) license

Changing Up, “Decripting” My Podcast Methods, Eh, Ai? Eh?

By: cogdog

You know you’ve been around this game a grey-haired time if you remember that podcasting had something to do with this thing called RSS. I found shreds of workshops I did back at Maricopa in 2006, “Podcasting, Schmodcasting…. What’s All the Hype?”, and smiled that I was using this web audio tool called Odeo, whose founder went on to lay a few technical bird droppings.

I digress.

This post is about a radical change in my technical tool kit: relearning what I was pretty damned comfortable doing and, to a medium degree, appreciating for a refreshing change something that Artificial Intelligence probably has a hand in. Not magically transforming, but helping.

I’ve had this post in my brain draft for a while, but there is a timely angle, since this coming Friday I am hosting for OE Global a new series I have been getting off the ground, OEG Live, which is a live-streamed, unstructured, open conversation about open education and some tech stuff… really the format is to gather some interesting people and just let them talk together. Live.

This week’s show came as a spin-off from a conversation in our OEG Connect community that started with a request for ideas about creating audiobook versions of OER content but went down a path that included interesting ideas about how new AI tools might make this easier to produce. Hence our show live-streamed to YouTube Friday, June 2 is OEG Live: Audiobook Versions of OER Textbooks (and AI Implications).

I wanted to jot down some things I have been using and experimenting with for audio production, where AI likely has a place, but is by no means the entire enchilada. So this tale is more about changing out some old tech ways for new ones.

Podcasting Then and Now

Early on I remember using apps like WireTap Pro to snag system audio recorded in Skype calls and a funky little portable iRiver audio recorder for in-person sessions. My main audio editing tool of choice was Audacity, still something I recommend for its features and open source heritage. I not only created a ton of resources for it in the days of teaching DS106 Audio, I used it for pretty much every media project I did over the last maybe 17, 18 years. Heck, Audacity comes up 105 times in my blog (this post will make it hit the magic number, right?).

Audacity is what I used for the first two years of editing the OEG Voices podcast. Working in waveforms was pretty much second nature, and I was pretty good at bringing in audio recorded in Zoom or Zencastr (where you can get each speaker’s audio on separate tracks) and layering in the multivoice intros and Free Music Archive music tracks.

This was the editing space:

Multitrack editing in Audacity: waveforms for music, intros, and separate speakers.

After editing, I used various tools like Otter.ai and Rev.ai to generate transcripts, and cleaning them up required another listening pass. This was time-consuming, and for a number of episodes we paid for human transcription (~$70/episode), which still needed some cleanup.

Might AI Come in?

Via a Tweet (or was it a Mastodon post?) from Paul Privateer, I found an interesting tool from Modal Labs offering free transcription using OpenAI Whisper tech. Just by entering “OEG Voices” it bounced back with links for all the episodes. With a click for any episode, and some time for processing, it returned a not-bad transcript that would take some text editing to use, but it gives a taste that AI has a useful place in transcribing audio.
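
If you want to tinker with that same Whisper tech locally, it is not much code. Here is a minimal sketch, assuming the open-source openai-whisper Python package (and ffmpeg on the system path); the file name is just a placeholder, not a real episode asset:

```python
# Minimal sketch of local transcription with the open-source Whisper model.
# Assumes: pip install openai-whisper, plus ffmpeg available on the path.
# "episode.mp3" is a placeholder file name for illustration only.
import whisper

model = whisper.load_model("base")        # small, fast model; "medium"/"large" are more accurate
result = model.transcribe("episode.mp3")  # returns the full text plus timestamped segments

print(result["text"])                     # the whole transcript as one string
for seg in result["segments"]:            # or walk the segments for rough timestamps
    print(f'[{seg["start"]:.1f}s - {seg["end"]:.1f}s] {seg["text"]}')
```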

Gardner Campbell tuned me into MacWhisper for a nifty means to use that same AI ______ (tool? machine? gizmo? magic blackbox) for audio transcription. You can get a good taste with the free version, and the bump for the advanced features might be worth it. There is also Writeout, which does transcription via a web interface and translation (“even Klingon”). And likely a kazillion more services, sprouting every day with a free demo and a link to pay for more. Plus other tools for improving audio; my pal Alex Emkerli has been nudging the new Adobe tools.

There is not enough time in a day to try them all, so I rely on trusted recommendations and lucky hunches.

Descript was a lucky hunch that panned out.

Something Different: Descript

Just by accident, as these things seem to go, something I saw in passing, in this case boosted by someone in the fediverse, triggered my web spidey sense.

I gave Descript a try starting with the first 2023 OEG Podcast with Robert Schuwer. It’s taken some time to hone, but It. Has. Been. A. Game. Changer.

This is an entirely new approach for my audio editing. I upload my speaker audio tracks (no preprocessing needed to convert, say, .m4a to .wav, nor jumping to the Levelator to even out levels), and it chugs for a few minutes to transcribe. I can apply a “Studio Sound” effect that cleans up the sound.
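
For contrast, the kind of manual preprocessing this let me drop used to look something like the sketch below. This is just an illustration: the file names are made up, and ffmpeg’s loudnorm filter stands in for the level-evening the Levelator used to do, not for anything Descript does internally.

```python
# Sketch of the old manual preprocessing step: convert a speaker's .m4a to .wav
# and even out loudness. File names are placeholders; loudnorm is a stand-in
# for the Levelator-style level smoothing described above.
import subprocess

def preprocess(src: str, dst: str) -> None:
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", src,                        # e.g. "speaker1.m4a"
            "-af", "loudnorm=I=-16:TP=-1.5",  # EBU R128 loudness normalization
            "-ar", "44100", "-ac", "1",       # 44.1 kHz mono WAV output
            dst,                              # e.g. "speaker1.wav"
        ],
        check=True,
    )

preprocess("speaker1.m4a", "speaker1.wav")
```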

But it’s the editing that is different. Transcribing the audio means most (but not all) editing is done via text: removing words or moving sound around is done by looking at text. The audio is tied to the text.

Editing podcasts in Descript

I can move to any point via the text or the waveform. It does something where it manages the separate audio tracks as one, so if I delete a word or nudge something in the timeline (say to increase or decrease the gap above), it modifies all tracks. But if I have a blip in one track, I can jump into the multitrack editor and replace it with a silence gap.

But because I am working with both the transcript and the audio, when I am done editing, both are final. I’m not showing everything, like inserting music, doing fades, or invoking ducking. And it took maybe 4 or 5 episodes of fumbling to train myself, but Descript has totally changed my podcast ways (don’t worry, Audacity lovers, I still use it for other edits).

You can get a decent sense of Descript with their free plan, but with the volume of episodes, we went with the $30/month Pro plan for up to 30 transcription hours per month (a multitrack episode of, say, 4 voices for 50 minutes incurs 200 minutes of that). That’s much less than paying for decent human transcription (sorry humans, AI just took your grunt work).
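
In case that allowance math is not obvious, here is the back-of-the-envelope version, using only the example numbers above (each speaker track counts separately toward the transcription hours):

```python
# Back-of-the-envelope math for the 30-hour Pro plan allowance,
# using the example above: 4 speaker tracks, 50-minute episode.
voices = 4
episode_minutes = 50
plan_minutes = 30 * 60                             # 30 transcription hours = 1800 minutes

minutes_per_episode = voices * episode_minutes     # 4 * 50 = 200 minutes per episode
episodes_per_month = plan_minutes // minutes_per_episode

print(minutes_per_episode)   # 200
print(episodes_per_month)    # 9 such multitrack episodes fit in one month's allowance
```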

And I am maybe at about the 20% level of understanding all Descript does, but that’s enough to keep my pod going.

But it’s not just drop something in a magic AI box and out pops a podcast; this is still me, Alan, doing the editing.

Yet, if you like Magic stuff, read on.

Magic Podcast Production

Editing podcasts is work enough, but for all that work writing up show notes and summaries and creating social media posts, maybe there is some kind of magic.

Well, a coffee meetup in Saskatoon with JR Dingwall dropped me into Castmagic – “Podcast show notes & content in a click, Upload your MP3, download all your post production content.”

That’s right, just give AI your audio, and let the magic churn.

I gave it a spin for a recent podcast episode of OEG Voices, number 56 with Giovanni Zimotti, a really interesting Open Educator at the University of Iowa; you should check it out. It generates potential titles (none I liked), keywords, highlights, key points, even the text for social media posts (see all it regurgitated).

On one hand, what it achieves and produces is impressive. Woah, is AI taking away my podcast production? Like most things AI, if you stand back from the screen and squint, it looks legit. But up close, I find it missing key elements, and wrongly emphasizing what I know are not the major points. I was there in the conversation.

I’d give it a 7 for effort, but I am not ready to drop all I do for some magic AI beans.

Ergo AI

I’m not a Debbie Downer on AI, just skeptical. I am more excited here about a tool, Descript, that has really transformed my creation process. It’s not because of AI, and frankly I have no idea what AI is really doing in any of these improbable machines, but it is maybe aided by AI.

This stuff is changing all the time. And likely you out there, random or regular reader, are doing something interesting with AI and audio, so let me know! My human brain seeks more random potential neurons to connect. And please drop in for our OEG Live show Friday to hash out more about OER, audio, and AI swirling together.

Meanwhile, I have some more Descript-ing to do. You?

Updates:

I got downsed!

Alan: The new OLDaily’s here! The new OLDaily’s here!
Felix: Well I wish I could get so excited about nothing.
Alan: Nothing? Are you kidding?! Post 7275, CogDogBlog! I’m somebody now! Millions of people look at this site every day! This is the kind of spontaneous publicity, your name on the web, that makes people. I’m on the web! Things are going to start happening to me now.

with apologies to a scene from The Jerk

I got Jon Udell interested too…

And from Jon’s post I discovered more exciting features:


Featured Image: Mine! No Silly MidjournalStableConfusingDally stuff.

Improbable Machine flickr photo by cogdogblog shared under a Creative Commons (BY) license

Conversational Podcasting: Inspirational Moments with a Ukrainian Librarian for OEG Voices 51

By: cogdog

Equal portions of luck, fortune, serendipity, and a sorely needed dose of genuine humanity all went into the mix of the most current episode of the OEG Voices Podcast I have been doing for Open Education Global, one I am just blessed to click buttons for.

This was easily more than just a podcast; it was a moment of sheer positivity that seems more rare these days. I don’t think most of my colleagues truly grasped how powerful a thing we had made possible, simply by offering an invitation to talk, without script or structure.

I’ve already alluded to this episode in my rush of excitement to be part of a series of live, unstructured events for Open Education Week. On the middle day of the week, which just so happened to be International Women’s Day, we had coordinated a conversation with Tetiana Kolesnykova, Director of the Scientific Library at the Ukrainian State University of Science and Technologies, made possible by librarians Paola Corti and Mira Buist-Zhuk (I remain in awe of Mira for her superheroic translation skills, going back and forth between my English and Ukrainian for Tetiana).

I had suggested setting this up maybe 2 weeks prior in an email to Paola, who had invited Tetiana, who said she would be there “if she had sufficient electricity.”

Let that one sink in.

Now I am tempted to describe it all over again, but it’s more or less been blogged by me already, and you also get the full audio, of course, plus transcripts in English and Ukrainian. But mostly, take the time to listen to Tetiana tell how she and her colleagues managed to keep their university mission alive through a wartime invasion, just a year ago.

Just to summarize: just three weeks after bombs fell on Dnipro, Tetiana and her colleagues put into operation a crisis plan developed during the pandemic and organized how to provide all kinds of support, including course, library, and research services; she and her staff were at their library carrying out this heroic effort.

And it was not like Open Education had to swoop in to offer the OER goodies as a new offering of benevolence; Tetiana and the Scientific Library had been practicing and facilitating open access publishing and OER awareness since 2009.

I could not be more honored to just have this time, and in fact, after an hour when I offered an out, Tetiana wanted to keep talking.

After I had published the episode, I drafted an email of thanks to Tetiana, relying on Google Translate to try and turn my words into Ukrainian. She replied (in turn, I think, by translation):

Hello, dear Alan!
You made me and my family extremely happy people late last night!

In my previous life (before the war), I would never have thought that I would be a part of such a wonderful international project. In addition, you created a very cozy and friendly atmosphere in which I, as a guest, felt very comfortable.

At the beginning of the meeting, I was very nervous because: firstly, I didn’t have such experience in recording; secondly, I didn’t have time to prepare; and thirdly, I didn’t know what questions you would ask me.


But your kindness and sincere support, the enormous help of Paola and Mira, as well as the pleasant faces of Marcela and the other participants in your online studio, removed all barriers.

Thank you very much, Alan!
You, along with Paola and Mira, gave me wonderful emotions!

Alan, my colleagues and I (librarians, teachers, researchers) are also very interested in creating opportunities for collaboration. I would be happy to bring your suggestions to them.  I look forward to it.

Thank you very, very much to you, your friends in the studio, your family and everyone who supports Ukrainians in this terrible war.


Your help is invaluable.

email from Tetiana Kolesnykova

I remain firmly convinced that open education is often too focused on the stuff: the resources, licenses, courses, platforms, when really the most important factors are just being able to have human conversations and connections like these.

Just sit down and say ??????.


Featured Image: My own combination (no artificial intelligence even allowed) of a screenshot of the Ukrainian State University of Science and Technologies web site, a screenshot of the zoom session where we recorded the podcast, and 2011/365/63 On The Air flickr photo by cogdogblog shared under a Creative Commons (BY) license
