Hacker News
YouTube will show labels on videos that use AI (9to5google.com)
136 points by mikece on Nov 14, 2023 | 97 comments


I have mixed feelings about this.

For the past few months, I've been marking videos as "not interested" because they are AI-generated, and I can tell.

But the flip side is that as these tools become more prevalent, it's not immediately clear to me how this line will be defined.

If people are using AI to generate scripts but are still reading them, does that count? Or if they're using AI to generate the images but have written the script, does that count?

It just seems messy, but I'm glad they're at least taking an active approach to it. I also think it will be a sign of how Google as a whole will treat AI-generated content over time.


"it's not immediately clear to me how this line will be defined" - I'm struggling with this myself as well, even more as a creator than as a consumer.

I just built an AI-generated fakenews app [1] (for fun) and it opened my eyes: we're playing with fire.

The tech is already there: a bit of roop (deepfake) + SadTalker (lipsync) + ChatGPT, etc., and voila! Anyone can create realistic videos / music on the fly! It's both thrilling and terrifying.

AI's involvement in media production isn't just a technical footnote; it's a fundamental shift in the landscape of information and creativity. Just like we scrutinize the origins of our food, we need to dissect the genesis of our media.

What YT is doing here is a first small step. It's time for all tech giants to confront this reality head-on. We're at a crossroads, and the path we choose will redefine our relationship with technology, creativity, and truth.

[1] https://fakenews.me


> I just built an AI-generated fakenews app [1] (for fun) and it opened my eyes: we're playing with fire.

That's brilliant. You have to share details on how you built it!


The backend is made in bubble.io, and it uses a bunch of AI models via APIs: ElevenLabs for the text-to-voice, some lipsync models (via Replicate), a bit of deepfake (roop) for the host, and a bit of ChatGPT for the script generation.
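For anyone curious how those pieces chain together, here is a rough sketch of such a pipeline as a sequence of API calls. This is not the author's actual setup: the model name, voice ID, and lipsync model slug are all placeholders, and the real app uses bubble.io rather than Python.

```python
# Hypothetical sketch of a script -> TTS -> lipsync pipeline.
# Every service name, model ID, and field here is a placeholder assumption.

def build_pipeline(headline: str) -> list[dict]:
    """Return the ordered (hypothetical) API calls for one fake-news clip."""
    script_step = {
        "service": "openai",            # ChatGPT writes the anchor's script
        "payload": {
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user",
                          "content": f"Write a 30-second news script about: {headline}"}],
        },
    }
    tts_step = {
        "service": "elevenlabs",        # text-to-voice for the anchor
        "payload": {"voice_id": "PLACEHOLDER_VOICE", "text": "<script output>"},
    }
    lipsync_step = {
        "service": "replicate",         # a hosted lipsync model drives the host's face
        "payload": {"model": "PLACEHOLDER/lipsync-model",
                    "input": {"face": "anchor.mp4", "audio": "<tts output>"}},
    }
    return [script_step, tts_step, lipsync_step]

steps = build_pipeline("Moon declared independent")
print([s["service"] for s in steps])  # → ['openai', 'elevenlabs', 'replicate']
```

Each step's output feeds the next step's input; the actual HTTP calls and file handling are omitted.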


Seems your site got hugged to death. Was interested in one of the 'random' generated items it could put out.


I think if you're asking those questions then you're ahead of where 99% of people are when they think about "AI" (as are most of the folks on this site, by selection bias). I think as these tools mature and get included in more standard tools, as Adobe is doing, the distinction will blur enough that there will be some new criteria. And at that point maybe people won't care enough to have that distinction. But right now, people care a lot and it's (mostly) obvious enough, hence the policy.


It's pretty obvious if you click the link:

> We’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools. When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material. For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do.

It's for faked realistic videos. Scripts are unrelated.


It's also for self-reporting? That means the worst won't bother to indicate that they're generated.


That means that when you report them for generating fraudulent data that's not clearly signposted, they can be banned.


No, but it helps legitimize the hobby for people making covers or gamer presidents.


What if you are writing the scripts and using text-to-speech to read them, but the text-to-speech tools now have AI baked into them?

What if you are animating the content but using some tool with AI components in it?

Not only does it seem messy, but in the long run it's not feasible.


The issue is mostly YouTube doesn’t want 10,000 people all hooking up text to speech feeding in every single fanfiction.net story or similar websites and then auto generating 10 billion hours of HD video that nobody is going to watch except by accident. Let alone people doing the same thing with pure AI songs etc.

Photoshop using “AI” in its clone tool doesn’t run into those kinds of issues.


Honestly, I don't think web platforms care about the artistic integrity or anything like that. It's more that they don't want to have to be the storage destination for anyone who can figure out how to hook up a video generator to a while() loop. This segment of the userbase has the ability to grow to be 99% of your resource usage overnight, and with video being the most expensive form of media, it just isn't practical to welcome them with open arms.

See also: why no consumer backup platform offers unlimited storage anymore. It only takes a couple hundred hoarders to bleed you dry, and those guys don't even stand to profit from the activity the way the get-rich-quick youtube and kindle schemes promise.


When the Russo-Ukrainian war started, there was a glut of AI-driven 'news' stories on youtube. They had some details correct, but it seemed like a grab-bag of random events from the last month rehashed as a new news story. The videos often had thousands of views, despite the fact that much of the story was fictitious. I tend to not watch anything that isn't at least narrated by a human anymore. If someone didn't invest the time to make it, it probably isn't worth investing the time to watch it.


There were also deep fake videos of the Ukrainian president going around meant to demoralise the Ukrainian society and the Ukrainian army, specifically.


Let's face it, the Zelenskyy deepfake at the time was hilarious https://i.imgur.com/XSRIBz2.jpg


People have been asking where the line is.

- I think a video with a synthetic actor (even when it's cloning a human) is synthetic and should be labeled as such, whatever the provenance of the script

- I think a video with a human actor, but an AI-written script could also be labeled. The line is blurrier there for me, since some folks have very elaborate prompts which basically amount to "here's my first draft, make it better". But having a straight-up rule is still good. False positives are better than false negatives here.

- And then there's the issue of AI-generated translations (which is what we do at my startup[1] ). I do believe it's fair for viewers to have those tagged as well. And to be able to track provenance to avoid deepfakes.

[1] https://www.onetake.ai


I'm not sure if TTS is included in your first point, or if you meant a fake visual likeness. But I really want to see TTS labelled, because it's so good now that a lot of people don't know they are listening to a robot (and it's only going to get better). I know a lot of replies would say "well, what's the harm then?", and that's the insidious thing about A.I.: there isn't usually direct harm. But I think people have a right to know they are living in a fake world and be free to opt in if they wish.

On many of the fake channels, I see comments praising the fake actor for reading the stolen academic material (and ironically many of these comments are likely fake, too!).


Is having a random guy on fiverr read a script without further context significantly less "fake" than TTS?


Yes.


The problem is that the label will become meaningless when it's overloaded.

If everything that is touched by AI needs to be labeled, then every piece of content will be labeled as AI. If you have a 3 hour recording from Twitch with TTS in the middle, then that would get labeled as AI. Even if they didn't have to, nobody's going to check a 3 hour recording to see that they didn't get a TTS notification.

Put a photo from your phone into a video? Edited by AI.

Put a video from your phone into the video? Edited by AI.

Soon photo editing tools won't even make it obvious that they use AI. Is that "content aware fill" AI? Or is it something else? Will the average person even know?


I think there's a significant difference:

- between "grammar- and spellchecking", vs "having ChatGPT write the script"

- between "editing your pic with filters", vs "having MidJourney create the pic"

- between "translating your video with TTS", vs "having a synthetic clone of you speak a script from scratch"

The difference is qualitative rather than quantitative. In each case, one is AI-augmented, the second is AI-generated.


I think one thing I look forward to is editing my voice to sound different. The words and intonation can be kept, just make me sound like a different person. I hate listening to recordings of myself, and that's literally the only thing stopping me from becoming a Youtube creator. I don't know whether that would be classified as "AI".


The line is wherever content creators draw it:

> YouTube will...require that creators disclose the use of AI in a video


What good is putting a "uses AI" on every single YT video exactly?


I was thinking the same. What does "use AI" even mean? For example if you ask Bard for video theme ideas, did you "use AI?"

And as time goes on this line is going to get even more murky with AI creeping into software like Microsoft Word, video editing suites, Photoshop, et al.

Based on the description, it sounds like this has nothing to do with "AI" and is more about flagging videos that artificially create events that may not have occurred - or, as some would call it, "fake."


AI is closer to becoming synonymous with fake :/


I mean artificial flavor is associated with fake flavor, so it's not much of a surprise.


It's also a label that means nothing in practice, just like "uses ai".


I have a channel and have already pledged to make it 100% AI free in the following sense: I don't use AI tools to generate any content, period. I put a badge on my channel as well that says "100% AI Free".

More specifically it means: no generative video AI, no AI to write scripts, no AI period. To ** with AI.


So, how do you deal with the ML algorithms between your camera CCD and the disk? There’s all sorts of stuff that converts from raw to rgb, motion compensation in the compressor, etc.

The audio path is just as bad, but more fundamentally, how did you opt out of Google’s recommendation AI, and their practice of harvesting view information for ad targeting? Similarly, can you disable their close captioning stuff, and prevent them from using your videos for training?


Hi. I am talking about the fundamental creative aspect of MY role in making videos. So, generative AI. I am NOT concerned with the typical red herring of defining AI precisely. I am talking about taking a hard and cautious stance towards it, which is far more societally useful than trying to define it mathematically.

So, like I said, I avoid using AI myself for generative, creative tasks.

By the way, the sensor is CMOS and not CCD.


But, you're OK using it for content distribution, promotion, ad targeting, and surveilling your audience?

Some of that is still stuff you do, but it's stuff you probably do off camera.

It's an extremely fine line.


Not exactly ok. I am looking for ways to completely get off Google because of its recent developments.

Yes, it is a fine line, but that does not mean it's better to just bury my head in the sand and ignore it rather than challenge it, like so many technologists do.


No computational photography/videography?


Not for creative tasks. My photography is fairly low-tech. Hardly any AI. Of course, if some machine learning algorithms are used to design my camera or stabilize my video, well I can't avoid that. But I am avoiding AI for any creative tasks including denoise AI, etc. (I don't use denoise AI or ANY phones.)


This leads to a tar pit of 'What is AI?' It's fairly clear that none of the current batch of AIs have a sense of awareness, so obviously that isn't the threshold you are using. How about smart-resize, noise-removal, spell-checking, upload time estimation?


I disagree. Like I said to the other poster, I am not concerned with the precise definition of AI, but rather not using it for writing scripts, generative photos or video, etc. I have clarified this on my channel but the meaning is clear enough: no AI for any creative tasks.


Headline made me think they were going to 'detect' AI use and display a label, but it's that creators will be required to 'disclose' AI-use. Good luck.


"AI" is very hard to define.

Is noise cancellation AI? Is my use of GPT to reword one sentence of a speech AI? Is a face-detection autofocus system AI? Is automatically fixing my hair and removing a pimple or two AI?


From the article it's much clearer.

Paraphrasing, it's using AI to produce visual/audio/environmental deepfakes in a seemingly factual context.

E.g. a comedy sketch doesn't need to be labeled, but if you're putting words in a politician's mouth or adding 12 more rockets to a video that only had 2, then yeah.

In other words it's exclusively to combat misinformation and disinformation.


I'm not on Youtube as much today as I was during the pandemic; I feel like that was my peak period of watching various content creators who pump out decent content regularly. But if I jump on Youtube today and half of my recommended feed becomes listed as AI, I'll likely stop going there altogether.

If I start going on Youtube and watch a bunch of content and my spidey-sense goes off that a lot of these videos are AI-generated (without labels), I'll likely stop going there altogether as well.

And this is not a direct bash on AI-generated content. I think the tech is going to be immensely helpful for all sorts of stuff, including youtube video content creation, but I'm dreading this early adoption period where people pump out low quality junk. I'd rather just avoid it and do something else. I'm sure Youtube is aware of this and is trying to figure out how to control the problem, but the labels just aren't going to help me continue to go to Youtube.

Tough problem, I don't envy the folks at Youtube trying to figure this pickle out.


Generative video inherently isn't bad in all cases.

Let's say it's a subject matter expert (maybe another undiscovered Dr. Huberman) who lacks the video production skills to make very custom, unique content.

Being able to explain things well to beginners would, in that case, benefit greatly from generative video.

On the other hand, if there is a lack of quality content, a lack of video editing skills, and it's about passively generating ad revenue - the new SEO - that kind of fluff can probably be filtered out.


Yea, I agree on the whole. It seems like there's a lot of potential in the space. But I also recognize that as you lower the barrier to entry to the few quality creators, the barrier is also lowered to the masses who either have no sense of quality control, or worse are just trying to maximize their ad revenue.

I can choose not to watch this stuff, but I think if it starts to trickle too much into the content I want to watch it just won't be worth the effort of trying to weed low quality junk out.


Is Dr. Huberman not a complete and total quack like I’ve assumed?


So far it is mostly presenting the science and facts in a pretty simple way to understand for the many (instead of the few) with a healthy encouragement to decide what works for you.

For me, I’ve just discovered we may not hear from enough neurologists in life when specialists can only explain their area, the perspective of the brain is sometimes what I was after all along.

Valuing brain health alone has been beneficial. You may find that brain health, whether through him or someone else, is a really beneficial area to spend more time around. I certainly wouldn't if that information weren't reaching me the way it currently is.


I've only ever watched his videos on fitness (weight training, recovery, endurance), substances (alcohol, weed, nicotine, caffeine), and sleep.

He has some amazing and very knowledgeable guests on for his fitness videos. The substances and sleep videos also seemed to be very high quality.

Maybe he does participate in some quackery, but everything I've watched seems sound.


Only on HN lol


My solution is to only watch videos from my subscriptions. High quality content curated by myself. I wish Youtube itself had a way to group subscriptions by theme, I mostly use NewPipe for that reason.


I agree going with brands you trust is the best solution. Discovering new, quality stuff will remain a problem. And really anything Youtube tries to do will get gamified by content creators trying to turn a quick buck.


I am waiting for the equivalent of web SSL certificates for video. Every N frames could be signed with a cert. This way a content producer could publish a video, and video clients could verify the source of the video frame-by-frame to check whether it has been manipulated. It could go as low-level as the cameras signing frames as they record. Perhaps having videos with a verifiable source could help people verify non-AI videos. Sure, content creators could publish their own AI video and sign it, but perhaps the technique could make manipulating existing video more obvious.
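A minimal sketch of that frame-signing idea, using Ed25519 from the `cryptography` package. The per-chunk batching and the certificate chain (who vouches for the camera's key) are glossed over; this only shows the sign/verify core:

```python
# Sketch: a camera signs a hash of each chunk of raw frames; a player verifies.
# Key management and cert chains are out of scope for this toy example.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()   # would live in the camera's secure element
public_key = camera_key.public_key()        # published alongside the video

def sign_chunk(frames: bytes) -> bytes:
    """Sign the SHA-256 digest of N frames' worth of raw video data."""
    return camera_key.sign(hashlib.sha256(frames).digest())

def verify_chunk(frames: bytes, sig: bytes) -> bool:
    """A video player would run this check chunk-by-chunk during playback."""
    try:
        public_key.verify(sig, hashlib.sha256(frames).digest())
        return True
    except InvalidSignature:
        return False

chunk = b"...raw frame data..."
sig = sign_chunk(chunk)
print(verify_chunk(chunk, sig))              # True
print(verify_chunk(b"tampered frames", sig)) # False
```

Note that a signature only proves which key produced the bytes, not that the recorded scene was real.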


Point a camera at a monitor showing AI generated video and you now have signed video of AI content.


Anyone could see what cert signed it though. Sure anyone could sign video, just as anyone can have a site with SSL, but it would be easy to verify that the signing cert isn't the same. For example, if some news org shot footage with some of their cameras or produced some video, the video player could show which certs were used in editing and to capture the video. If another party tampered with the video, the signed frames could reflect that, as the other party would not have the signing certs.

This is all hypothetical vaporware of course. Just saying I wonder if a video encoding solution like this could help people have faith in the video content they see.


An awful lot of people here arguing over "WHERE IS THE LINE?" when the article clearly states:

> When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material. For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do.


"USDA Organic" labels for digital content, this will be a bigger and bigger thing.


This will be good for identifying and eliminating the easiest and laziest uses of AI in the new form of content farms.

What might remain hard?

Hiring professional voiceover actors, and still doing actual video editing instead of rendering.


Then they will train their AI to generate better, more constructive videos with better graphics and sound. Then mission fucking accomplished.

https://xkcd.com/810/


Haha, love the xkcd.

The cat and mouse won’t stop, just the floor of what is tolerated will keep rising.

I’m sure there’s people so far ahead in ai video for no reason other than to quietly get ahead and stay ahead.


Are they going to also label all the ai generated/selected search results and ad targeting decisions they produce themselves?


This is a bandage, not a cure.

It won't work: Truth can only be healed with light.


Excellent news - we need to label ai generated news, movies, code, and so on. We also need a way to prevent data grabbing - perhaps DRM for text and images.


I think you may mean a way to prove provenance, not DRM

There is good prior art happening in the software supply chain, the problem for media content is that you want something like a hardware signature created before software enters the fray


DRM has failed consistently in the past. Why would it work this time?


I don't know - but a way to protect content should be implemented. Otherwise what's the plan? Let corporations harvest IP and open source code and then resell it, while we can't do the same with theirs? It needs to be a level playing field.


Sounds like the "something must be done, this is something, ..." fallacy. What if it's not actually possible to "protect content" the way you want and every attempt to do so just makes it harder for the little guy while doing nothing to stop those corporations you're worried about?


But then the little guy will have to live with lower quality content, because if I am not incentivised to create it then I won't. So then you're left with the spam and AI-generated nonsense.


Abolish copyright and then you get to do the same with theirs, level playing field ;)


Repealing intellectual "property" laws would level the playing field.


What do you mean by "protect content"?


Have you read the Biden EO on AI?

The open source AI community has a lot more to worry about than just trying to maintain a level playing field with corporations.


Corporate AI is going to be useless if not trained against quality content. That means blogs. If you can ban Microsoft's OpenAI from taking your content for free while allowing only open source projects with attribution to do so, then the advantage is yours. DRM wouldn't be effective against non-corporate use, but against corporations you'd have proof that they took something you explicitly didn't allow - they can't claim "don't put it on the internet if you don't want it stolen".

Also open source licensing should change to prevent them from taking advantage.

Plenty of non ai open source code that’s been taken for free by billionaire corporations that give nothing back, yet more, they order workers around.

Ai can be a tool to replace exactly those that wish to replace everyone else.


> DRM for text and images

Please, no. We need less IP, not more.


> DRM for text and images

Ew, ew, ew, ew, ew. No! I hope that was sarcasm. DRM for everything else so far is already a mistake. Do not put that evil on me, RickyBobby.


How is YouTube even going to know?


Self reporting most likely.


There’s no incentive for users to self report that. Especially the ones that will start bulk uploading content for views.


Will this be applied retroactively to most Hollywood output since Jurassic Park?


YouTube is the last high-quality web 2.0 service for me, until I steer away from my subscriptions.

If I wander a few videos too far away I start seeing videos about reptilians controlling the world on channels with 8M subscribers and they show "footage of people shapeshifting caught on camera", completely fabricated alternative history, flat earthers uncovering the grand conspiracy of the globalist etc.

I don't know how AI is worse than this. Also, apparently it's based on the creator disclosing the use of AI in the creation process. I guess the only "authentic" videos will be those of shape shifting reptilians and proof videos that US never landed on the moon. Kind of pointless.


I watch about 20-30m per day of youtube (not counting listening to videos to put me to sleep) and I've never encountered this. The worst I have seen is cheap knockoff channels with text-to-speech bot voices, which I suspect are just reading pirated textbooks to stock footage. I dislike people doing that, but there was nothing dubious about the content itself.

One thing I'm very cautious about is clicking on rage-bait links from other people. My best friend likes watching dramatic cop-encounters and shares them with me, but I'm always hesitant to click because I'm afraid it will start suggesting them without ability to stop. I can't say if this will work for you, but I aggressively use the "don't show me this" and "tell us why: I don't like the video". I rarely use the "don't recommend channel" (but wouldn't hesitate to use it if I ever got suggested pseudoscience or fake-news bs).


"but I'm always hesitant to click because I'm afraid it will start suggesting them without ability to stop."

The only workaround I know of is using incognito tabs/new profiles/a different account. It should be supported out of the box: watching a video with the option to not include it in your profile.

(my account is messed up beyond hope for not doing this; I can only start a new one, but I don't use yt that much anyway)


If you could pick a curated video catalog, where a human editorial team accepted or declined new video submissions at upload time, and a human team decided what videos ought to be featured / amplified more broadly to various user clusters, would you prefer that to an open system where anyone could upload and ML decided that some users enjoy seeing reptilian conspiracies but a particular group of moderators/curators weren't the arbitrators of truth and taste and moral purity?

(not a loaded question, and it is possible different companies could emerge that would compete on the basis of their taste/curation/moderation policies. also equally possible it would be too costly/ unprofitable for the market to bear many smaller competing entities).

I think perhaps there's a third option but we just haven't really defined and figured it out yet. Some mixture of crowdsourced taste/moderation plus top down taste/moderation plus unfiltered UGC. Twitter's new user-generated Community Notes might be a good example of a step in a new direction. Social media is still relatively new.


> arbitrators of truth and taste and moral purity

I have not once looked at TV network executives, publishing house editors, museum curators, retail inventory buyers, librarians, gallerists, or magazine editors as any of those things. Why would somebody do so for video curation?

You don’t need some complicated top down bottom up crowdsourced ML blah blah blah. You just need to be able to contextualize content to the curator. Which is what people naturally do when there is an accountable curator.

Perhaps people who grew up recently only know content as endless troves of machine-curated feeds with no accountability or attributability, but that’s actually just a very ahistorical side effect of Section 230.

Every other way of experiencing media has always been through some curated context, with a specific entity you can point at as the responsible curator, and through which you color your experience of the content.

That’s not to say people couldn’t be misled in that model as well, but this whole “curators are the arbiter of truth” thing has no real or historical ground. Explicit curation actually offers the very opposite thing: recognizable, accountable, obvious context.


I think I’m happy with the current system, I’m anti-censorship and IMHO any content that doesn’t harm someone directly should never be deleted.

I am on the side of personal responsibility, that is, any content should be associated with its creator, and it should follow them. If someone posts completely ridiculous video, that video should affect their personal lives. If they change mind and apologize, that should be accepted too. That’s basically how real life relations work.

I just find the labeling as AI pointless.


I think the root of the problem is presuming that "engagement" is always good. Engagement can be for multiple reasons, and the problem will be solved as soon as we can figure out how to more qualitatively measure engagement.


Why does it have to be a centralized team?

I think the best answer will be some form of decentralized moderation. All we need is to put curators (users) in a web of trust, and have them cryptographically sign their opinions.


> If I wander a few videos too far away I start seeing videos about reptilians controlling the world on channels with 8M subscribers and they show "footage of people shapeshifting caught on camera", completely fabricated alternative history, flat earthers uncovering the grand conspiracy of the globalist etc.

If you click on that kind of shit, of course youtube is going to show you more. I don't know how people think youtube works but it seems like it should be obvious. If you click on nonsense "ironically" just to laugh at it, youtube doesn't know that and is going to show you more. Even if you dislike and "don't recommend" the video/channel, youtube doesn't know why you disliked it. Maybe you disliked it because the video said the aliens are from Venus but you know they're actually from Alpha Centauri. Youtube doesn't know why you disliked it, all they know is that you clicked on it in the first place, so they'll try to find more to show you.

All I click on is videos about ships, airplanes and trains. That's all youtube recommends to me. No wacky shit in my recommendations because I never click on anything even remotely wacky, not even just to laugh at it. Youtube is a mirror that reflects your viewing habits back at you. If it recommends trash it's because you watch trash. Btw I don't even use an account, just a cookie.


As AI gets better, those people making "reptilian globalists" videos will be able to crank out closer and closer to a dozen a day.


> videos about reptilians controlling the world on channels with 8M subscribers

Link?


For example, a shapeshifter video: https://youtu.be/9UCLykdar_k?si=ScpO4zcT99K0Ryf_


I've always struggled with where people draw the line.

Video scripts have been nearly algorithmic for humans already. Does using ChatGPT to make your script count as AI-made? What if you gave it an outline, walked through it, and guided it to what you wanted - e.g. Grammarly?

If you use machine learning to remove background noise, backgrounds in general?

Generated stock images/props/scenes for video essays?

If you made the entire video by hand but use text2speech?

I think that "the script, images, and voice were all chatGPT" is obviously "synthetic", but that's just the extreme. Humans have been using technology to augment their creation abilities forever.

My fear is a large amount of human made content will be called "synthetic" because some specific part used "AI" (which nowadays refers to literally any procedural/statistical/machine learning I guess?)


It's blurry and will only get blurrier.

Maybe there is no line to draw, and maybe that's OK.

This all feels so new I'm hesitant to form strong feelings, but I do think transparency while not required is appreciated. With knowledge, I can draw my own conclusion. Without it, I may end up feeling "duped."

Generally I want to continue enjoying human-created art. If something is "more" human-made than not, that's a factor I'll weigh as "better" and more authentic in my mind. But I'm also trending towards preferring indie dramas over CG-filled blockbusters, so I'm not pretending to be any sort of bellwether.


Yeah, it seems like the only "general solution" to this problem is to measure the number of "human work hours" that went into the production of a specific type of "content".

Content would then be labeled as "AI generated" if the "human work hours" was less than X hours (like 0.5 hours).

Whereas content would not be labelled as "AI generated" if the "human work hours" was greater than or equal to X hours (like 0.5 hours).

That would likely require sharing with YouTube not just the final work product, but rather all the iterative work product that led to the creation of the final work product (i.e., earlier drafts). When YouTube could calculate the number of "human work hours" based on the incremental creation/edit histories when analyzing all the drafts.

This is a very hard problem and unlikely to actually be solved in this way.
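As a toy illustration of the threshold rule above (the 0.5-hour cutoff and the idea of estimating work from gaps between saved-draft timestamps are illustrative assumptions, not anything YouTube actually does):

```python
# Toy "human work hours" labeler. All numbers and the gap-capping heuristic
# are made up for illustration.

def human_work_hours(edit_timestamps: list[float], max_gap_h: float = 0.25) -> float:
    """Estimate active work time from draft-save timestamps (in hours):
    sum the gaps between consecutive saves, capping each gap at max_gap_h
    so breaks and overnight pauses don't count as work."""
    ts = sorted(edit_timestamps)
    return sum(min(b - a, max_gap_h) for a, b in zip(ts, ts[1:]))

def label(edit_timestamps: list[float], threshold_h: float = 0.5) -> str:
    """Apply the X-hour threshold from the comment above (X = 0.5)."""
    return "human-made" if human_work_hours(edit_timestamps) >= threshold_h else "AI generated"

# Four drafts saved 20 minutes apart: three gaps, each capped at 15 minutes
print(label([0.0, 1/3, 2/3, 1.0]))  # human-made
# One prompt, one render, two minutes apart
print(label([0.0, 2/60]))           # AI generated
```

Even this toy version shows the obvious failure mode: the estimate is only as trustworthy as the edit history it's fed, which the uploader controls.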


A percentage would be better.

>That would likely require sharing with YouTube not just the final work product, but rather all the iterative work product that led to the creation of the final work product (i.e., earlier drafts).

Not necessarily. YouTube generally trusts the word of the creator. If I upload a video and check the box that says "there is no swearing in this video" then YouTube more or less takes my word for it. They try to do some detection, but it has always been spotty.

I think YouTube would just ask the creator to label the video and that's it. If somebody is found to be in violation too many times then their account gets actioned.


So your solution is proof of work?


Why struggle with where other people draw the line? Draw the line yourself and don't concern yourself with the line other people draw for themselves.


The line is clear, deepfake stuff is supposed to be labeled. This is not a luddite moral panic measure that would label scripts.


I don't like any censorship... Including this



