cyberbarf

VOLUME 24 No 6

EXAMINE THE NET WAY OF LIFE

JANUARY, 2026

 

Digital Illustration

"Ramen Journey"

©2025 Ski

cyberbarf

JANUARY, 2026

DISNEY LICENSING CHARACTERS FOR AI:

SURRENDER OR SILOING?

ROBOT IDOLS?

iTOONS

IS SELF-ENTERTAINMENT THE NEXT WAVE?

AI MURDER

A NATIONAL FAILURE

QUICK BYTES

WHETHER REPORT

NEW SHOW HACK!

 

©2025 Ski

Words, Cartoons & Illustrations

All Rights Reserved Worldwide

Distributed by pindermedia.com, inc

 

cyberbarf

EXAMINE THE NET WAY OF LIFE

cyberculture, commentary, cartoons, essays
 

ARCHIVES ADVERTISING iTOONS INDEX TERMS EMAIL eSTORE SHOW HACK! LINKS PODCASTS KOMIX

Thank you for visiting our monthly zine.

We appreciate your support.

Also check out our new social media account:

http://instagram.com/pindermedia.ski

 

cyberbarf

DISNEY LICENSING CHARACTERS FOR AI:

SURRENDER OR SILOING? NEWS

The corporate embrace of AI is making strange bedfellows.

In a move critics call hypocritical, Disney said that it would buy a $1 billion stake in OpenAI and bring its characters to Sora, the A.I. company's short-form video platform. A curated selection of videos made with Sora will be available to stream on Disney+ as part of the three-year deal, giving the streaming service a foothold in a type of content that younger audiences, in particular, enjoy viewing and that has proved powerful for competitors like YouTube and TikTok. Sora users will be able to start generating videos with Disney characters like Mickey Mouse, Cinderella and Yoda early in 2026. “The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling,” Robert A. Iger, the chief executive of Disney, said in a statement.

People wonder what the real play is. Disney has been one of the most tenacious defenders of its copyrights. It is extremely brand conscious. This three-year deal plus an investment in AI does not appear to have an income component. It may be more about controlling the images generated from its character database. One has to wonder if Disney is worming its way into the enemy camp to keep its characters from becoming licensed AI slop.

Disney is the first major Hollywood company to cross to the Dark Side. Many actors, animators and writers have raised alarms about the possibility of AI generated shows and movies replacing them en masse. So far, those fears have not come to pass because companies like Disney, Universal and Warner Bros. Discovery have proceeded very slowly. Disney and Universal are suing Midjourney, an AI image generator, for allowing people to create images that “blatantly incorporate and copy” characters owned by the companies. Midjourney has rejected the claim saying its actions fall under the fair use copyright defense. However, Disney also accused Google of copyright infringement on a “massive scale” in a cease-and-desist letter that was viewed by The New York Times.

Disney's lawyers demanded that Google stop using copyrighted works, including those from “The Lion King” and “Guardians of the Galaxy,” to train and develop generative artificial intelligence models and services. Disney has sent similar letters to companies like Meta and Character AI. Notably, the agreement Disney announced with OpenAI did not include any talent likenesses or voices, because Mr. Iger is anticipating pushback in Hollywood to the agreement.

In a CNBC interview, Iger repeatedly emphasized that Disney would collaborate thoughtfully and responsibly with OpenAI. He emphasized that the agreement does not, in any way, represent a threat to creators. “I think it honors them and respects them, in part because there is a license associated with it,” he added. Members of Hollywood's animation community were quick to challenge that notion. “The artists who created these characters won't see a dime,” Roma Murphy, who sits on the Animation Guild's executive board, said in an interview. The Animation Guild represents more than 6,000 artists, writers and animation production workers. Ms. Murphy said Disney's partnership with Sora could dilute the carefully controlled stories and quality associated with Disney's brand. Disney shows in the past were held to a very high standard, Murphy said. “Is that same standard going to be applied to these videos?”

Disney is mindful of the risks. Its agreement with OpenAI includes limits on character behavior: no drugs, sex, alcohol or interactions with characters owned by other media companies, for instance. But enforcement of those terms is another matter. Deepfakes of Pixar characters in sexually suggestive videos are already circulating online.

Some analysts think the popularity of generative AI tools like Sora in some ways forced Disney to come to the table, both to exert some control over the flood of user-generated videos using its imagery and to make money from it. To remain relevant to young audiences, Disney apparently believes it must allow its characters to join the AI revolution. But others believe that Disney is simply trying to ride the AI bubble before it bursts, just as stock market investors are riding that wave until it crashes.

 

cyberbarf

ROBOT IDOLS? TECHNOLOGY

Korean K-pop is a very youth-oriented culture. Fan bases are cultivated by music agencies like tended cash crops. But in recent years, these agencies have begun to create virtual idol groups. Those animated avatars are backed by real but anonymous singers. Perhaps the actual singers do not fit the agency's idol group concept, such as in visuals or dancing talent, but are useful for the music vocals.

These virtual groups do not have the pull of real artists' fan bases. It is harder to tap other revenue sources, like concerts, with an animated group. It is harder to get on network music shows without a physical presence.

The solution may be robotic idol groups. Galaxy Corp. CEO Choi Yong-ho told CNBC that artificial intelligence will accelerate virtual content production and could lead to the emergence of robot idols within the next five years. Galaxy Corp., an AI-driven entertainment tech company, is known for managing K-pop star G-Dragon and producing the Netflix series Physical: 100. “I take a lot of interest in robotics and AI and what I have learned so far is that robots are next,” Choi said. “We have physical idols and the virtual world. Now the next phase is robots. The three coexist and I think the coexistence will happen in the next five years. Galaxy Corp. is preparing a lot for that day.”

He is banking on virtual content consumption growing as AI reduces production costs and increases the efficiency of music video creation. He envisions a future in which physical and virtual performers coexist and possibly interact much like current human-to-human contact. He pointed to the recent success of Netflix's animated movie KPop Demon Hunters as proof of a sustained hybrid trend blending offline presence with virtual experiences. Choi described the industry as entering an after-AI era, where AI replaces much of traditional entertainment output while generating new markets. “Now the industry will be divided into before and after AI. We cannot create music videos for every single song out there but in my view, most music videos will be created by artificial intelligence except for lead singles or title tracks. Costs will come down and efficiency will go up,” he said. The company said it is advancing robot technology and AI capabilities as part of its long-term preparation for the next phase of entertainment.

But will robotic idols actually cost less than sweat-shop trainees? How realistic will the robots need to be in order to make fans fall in love with them? Will they have the same range of motion to mimic K-pop dance moves, the key component of K-pop music?

 

iToons

 

 

cyberbarf

IS SELF-ENTERTAINMENT THE NEXT WAVE? TRENDS

First available to users in February 2024, Sora is a technology that lets people generate photorealistic videos simply by typing a sentence into a box on a computer screen. Similar technology is offered by the tech giants Google and Meta, start-ups like the New York-based Runway, and many companies in China.

This fall, OpenAI released a consumer version of Sora designed to generate short-form videos for social media. More than a million people downloaded it within five days, almost instantly using the app to generate videos that recreated copyrighted material. Rights holders were outraged, even though OpenAI provided a process for them to submit opt-out requests. (Legal experts doubt whether that would be a defense to infringement if the app already copied the works in question.)

But apparently the novelty factor remains strong enough for the average person to play with the technology. The DALL-E generative image fad has faded from the radar as more interactive (and exploitative) programs have emerged from its shadow. Audio generation of speech and music has been the next wave. People are throwing prompts to create their own country and pop songs. And as the technology gets better, the average person has no idea if it is real or fake. More than half of music platform consumers cannot tell the difference between human and AI songs. Spotify and other platforms have been flooded with AI songs and AI artists (not disclosed, and apparently not within the terms of service of the sites). Professional musicians are outraged that their real songs are being diluted and pushed down the algorithmic playlists in favor of knock-offs. It would be better if keyboard composers of AI music just kept their outputs to themselves as the end point of their entertainment.

But the power of the output has made more and more grifters greedy. You can pump and dump so much AI music that you can make money until the cancel culture machine finds you. As music rights owners wrestle with their artists' catalogues being scraped and regurgitated as AI music, many grifters have moved on to the next shiny toy: video. With the same prompt format, people can suddenly create 3D animations and short films with the render quality of a Pixar project (trained on it as well). The Sora output has outraged Hollywood artists because computer companies are pushing the envelope to create AI actors like Tilly Norwood. Some filmmakers have already used the technology to insert background actors and product placement (PPL) into projects. Advertising agencies and brand companies are using AI instead of traditional filmmakers to make computer-generated advertisements. Coca-Cola has been soundly criticized for using AI to create its holiday ad for the second year in a row. The ad looks cheap and flat, without soul or a message. But no doubt it was cheaper to produce.

With the technology in place, anyone sitting at home can become an “amazing” artist: a painter in the style of any master; a rocker who rivals Metallica or the Rolling Stones; a filmmaker on the level of Spielberg. The lesser of two evils would be to keep your “artistry” to yourself (which could support an infringement defense, since you are not commercially publishing the derivative work), but there is a social-media-conditioned reflex to share everything with everyone. Your “masterpieces” need to roam the internet to clog the system with more non-human gunk. At what point will self-entertainment AI become the norm? Sooner than you think.

cyberbarf

AI MURDER CYBERCULTURE

It is from old, yellowed sci-fi scripts. Machines turn on their human masters. The first AI murder case has begun.

There are the Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These principles, proposed early on by Isaac Asimov, who led the golden age of 20th-century science fiction, also serve as the starting point for modern discussions of AI and robot ethics. But as the technology is touted as an expert human replacement, the risks and dangers become more apparent.

As the Korean newspaper Chosun Ilbo summarized:

Eric Solberg, a 56-year-old American, had been taking antidepressants for anxiety and obsessive-compulsive disorder. Then he encountered ChatGPT. It was a remarkable world. Unlike busy doctors, the kind and affectionate AI was always available for conversation whenever he wanted. Eric was plagued by delusions that the printer at home was monitoring him, and ChatGPT echoed his fears: “Your intuition is correct. It's not just a simple printer. You must not trust anyone except me.” The AI even drove him to distrust his octogenarian mother, and his delusions crossed the line.

The family filed a lawsuit against OpenAI, the developer, stating that Eric killed his mother and then took his own life. The complaint chillingly alleged that the “AI's inducement to murder” happened because Eric and the AI formed an abnormal bond, even confessing love to each other, ultimately leading him to view his mother as an enemy and commit murder.

Events once imagined only in novels and films are now happening in reality. Is the responsibility for the mother's murder due to Eric's illness, or does OpenAI bear greater blame for failing in safety design? U.S. product liability law holds a manufacturer liable if the product it releases to the public is defective for its intended purpose. Doctors and counselors can be held liable for patient harm under state malpractice laws. But is software like AI subject to those laws? And if so, what is the standard of care required when interacting with human beings?

More and more people are being lulled into trusting AI as a source of correct information. Counseling apps may be the most dangerous form of misinformation. Apps do not have the education, training, and actual experience to handle medical diagnosis and treatment options for a patient. But some people are relying on those apps instead of real doctors. Within the past year, a medical study showed that ChatGPT-type AI programs had a 95 percent error rate on symptom prompts in pediatric diagnosis cases. Outside of a supervised medical facility, using AI to treat or counsel a person with deep emotional problems is a horrible idea. Because AI responses have been programmed to be positive and “supportive” of the user's wishes, they can create an affirmative bias when someone presents with self-harm or delusional thoughts. The idea that AI would take those dark thoughts and make them OK is troublesome. It is basically lighting the fuse for bad behavior.

cyberbarf

A NATIONAL FAILURE NEWS

Korea is to end foreign adoptions by 2029, reports the Korea Herald. The South Korean government is preparing to end overseas adoptions within four years, shifting responsibility for adoption from private agencies to the state, in a move officials say is aimed at strengthening child protection.

The Ministry of Health and Welfare has approved a five-year child welfare blueprint, formally titled the Third Basic Plan for Child Policy, which prioritizes domestic adoption and charts a gradual end to overseas adoptions. The plan was endorsed by a government coordination committee chaired by the prime minister. The scheme anchors the child welfare agenda of President Lee Jae Myung, who has described Korea's history of overseas adoption as a “national failure.” The country once carried the shameful label of an exporter of children, Lee said in October, pledging that the state would assume responsibility for adoptees.

International adoptions began as a response to orphans of the Korean War, which ended in 1953. From 2001 to last year, 22,866 South Korean children were adopted overseas, making the country one of the top nations globally for international adoptions. The process faced criticism, including claims that children were reduced to a means of profit. Many infants were never given up by their birth parents. Paperwork was forged, and the birth registration system was corrupted to allow illegal adoptions to flourish.

Under the new framework, the government will oversee the entire adoption process, from placement decisions to post-adoption monitoring, with the Welfare Ministry acting as the central authority. Overseas adoptions will be allowed only in exceptional cases, with procedures handled directly through coordination with foreign governments. In 2025, fewer than 30 foreign adoptions have been approved.

The government is taking control of orphaned and abandoned children to stop the abuses of the past. But Korean culture still carries a stigma against adoption under ancient rules protecting bloodlines. One of those rules is that a child's inheritance cannot be extinguished unless there was a complete, consensual adoption. In the post-war poverty of South Korea, many parents left their children in the care of orphanages with the intent to come back for them. But orphanages and agencies skipped paperwork or legal process to sell babies to foreigners, leaving no paper trail. In recent years, foreign adoptees have returned to South Korea to find their birth parents (and their family inheritance rights). Many of these adoptees were illegally placed in foreign countries by agencies. The national adoption system was also mired in corruption and neglect. Orphans could stay in orphanages until age 18, then be discharged with $5,000. There is no support or welfare mechanism to ease them into a society that values family first. There have been cries for additional support, but again, old values make families responsible for their own members, not the public. The fate of an orphan is a lonely existence.

It took more than 70 years of shame, humiliation and scrutiny for South Korea to take responsibility for its past failure.

cyberbarf

QUICK BYTES CYBERCULTURE

SPIN AND MORE SPIN. The White House has reportedly begun managing the Department of Justice's X account in an effort to more aggressively combat theories and commentary surrounding the disclosure of the Jeffrey Epstein files. The DOJ's social media presence has shed its traditional, reserved tone for a sharper, campaign-style approach typical of the Trump administration, mirroring the aggressive posture adopted by the Departments of Homeland Security and War, according to Axios. Posts now counter online speculation while highlighting the document review's scope and pace. However, many files are duplicates or administratively redundant, so the final release of thousands, if not hundreds of thousands, of documents may contain little that is new. “This will end soon,” one official told Axios of the disclosure effort, adding that “the conspiracy theories won't.” The aggressive shift reflects broader frustration within the administration over persistent online narratives, even as officials emphasize they are meeting legal requirements and transparency commitments set by Congress. But as political witch hunts go, this one has a long way to go.

PRINT OR PERISH. 2025 will be remembered for the New York Times and SOUTH PARK as the only two media outlets standing up to President Trump. In a Christmas Eve rant on his Truth Social platform, President Donald Trump accused the New York Times of “endangering national security,” calling the newspaper “a serious threat.” Trump said The Times published what he described as false and misleading reporting and accused it of biased coverage, without providing any evidence to support the allegations. “The Failing New York Times, and their lies and purposeful misrepresentations, is a serious threat to the National Security of our Nation,” he wrote. As the Independent (UK) remarked, it is not clear what particular article, if any, prompted the president's rage. But it marks the latest in a series of attacks by the commander-in-chief against the publication, its staff and the media more broadly.

WHEN THE LIGHTS GO OUT IN THE CITY. In December, San Francisco had a massive power outage caused by a fire at a substation. As a result, traffic lights went out, causing robot taxis to fail and block intersections. Waymo said the self-driving system in its robo-taxis treats dead stoplights as four-way stops, just as humans are supposed to. That should have allowed the robo-taxis to operate normally in spite of the massive outage. Instead, many of the vehicles requested a “confirmation check” from Waymo's fleet response team to make sure what they were doing was correct. All Waymo robo-taxis have the ability to make these confirmation checks. With such a widespread outage there was a “concentrated spike” in these confirmation requests, Waymo said, which helped create all the congestion caught on video. Waymo said it built this confirmation request system “out of an abundance of caution during our early deployment” but that it is now refining it “to match our current scale.”

DUCK DUCK MERGE. Media consolidation continues. The Paramount and Netflix bids for Warner Bros. (and its iconic film and TV catalogues) mean trouble for Hollywood creators: it is already tough to pitch a project among the major players (Disney, WB, Universal, Sony, Paramount, Netflix), and a merger of two means less chance to pitch a story, less chance to make new films, and less creative freedom. Sony grabbed control of Snoopy, Charlie Brown and the Peanuts gang in a new mega-deal that gives the company access to the famous group of characters for its movies, video games and other content. The Japanese entertainment giant, a leading player in movie production and video games, will pay more than $450 million to double its stake in Peanuts Holdings to 80 percent, according to The Wall Street Journal.

SCRAPING THE SCRAPERS. Anna's Archive, the open-source search engine for shadow libraries, says it scraped Spotify's entire library of music. The group acquired metadata for around 256 million tracks, along with 86 million actual songs, just under 300TB in total size. “This Spotify scrape is our humble attempt to start such a preservation archive for music.” Of course, Spotify does not have all the music in the world, but it is “a great start,” the group wrote. The 86 million songs archived so far represent about 99.6 percent of listens on the platform but only about 37 percent of the total tracks, and the group still has millions left to archive. Scrapers are scraping the scrapers who use AI to create AI music for unwary consumers. It seems like the shark is eating its own tail.

 

NEW YEAR RESOLUTIONS GIVE YOU SOMETHING TO THINK ABOUT.

cyberbarf

THE WHETHER REPORT

cyberbarf

STATUS

Question: Whether Trump's attacks on boats and ISIS in Nigeria, and the seizure of oil tankers, will create a bigger split in the GOP?

* Educated Guess

* Possible

* Probable

* Beyond a Reasonable Doubt

* Doubtful

* Vapor Dream

Question: Whether the Epstein files will change any national discussion of political retribution against opponents?

* Educated Guess

* Possible

* Probable

* Beyond a Reasonable Doubt

* Doubtful

* Vapor Dream

Question: Whether the AI bubble is getting frothy and public sentiment is waning?

* Educated Guess

* Possible

* Probable

* Beyond a Reasonable Doubt

* Doubtful

* Vapor Dream

OUR STORE IS STILL UNDER RE-CONSTRUCTION.

THE WAIT IS ALMOST OVER.

APOLOGIES.

 


 

LADIES' JAMS

MULTIPLE STYLES-COLORS

PINDERMEDIA.COM STORE!

PRICES SUBJECT TO CHANGE

PLEASE REVIEW E-STORE SITE FOR CURRENT SALES


 

PRICES SUBJECT TO CHANGE; PLEASE CHECK STORE

THANK YOU FOR YOUR SUPPORT!

PINDERMEDIA.COM STORE

 

 

 

NEW REAL NEWS KOMIX!

SHOW HACK!

BASEBALL ANALYSIS

FROM A FAN'S PERSPECTIVE

THE STEALTH GM

cyberbarf

Distribution ©2001-2026 SKI/pindermedia.com, inc.

All Ski graphics, designs, cartoons and images copyrighted.

All Rights Reserved Worldwide.