Commons:Village pump/Proposals
This page is used for proposals relating to the operations, technical issues, and policies of Wikimedia Commons; it is distinguished from the main Village pump, which handles community-wide discussion of all kinds. The page may also be used to advertise significant discussions taking place elsewhere, such as on the talk page of a Commons policy. Recent sections with no replies for 30 days and sections tagged with {{Section resolved|1=--~~~~}} may be archived; for old discussions, see the archives; the latest archive is Commons:Village pump/Proposals/Archive/2026/01.
- One of Wikimedia Commons’ basic principles is: "Only free content is allowed." Please do not ask why unfree material is not allowed on Wikimedia Commons or suggest that allowing it would be a good thing.
- Have you read the FAQ?
SpBot archives all sections tagged with {{Section resolved|1=~~~~}} after 5 days, as well as sections whose most recent comment is older than 30 days.
Stricter blocking policy for copyright violations
We still have a huge problem with blatant copyright violations where people upload some photo they found on the internet as their own work. Currently these users are warned multiple times and get blocked for two weeks. After the block they continue uploading copyright violations. I think we should be more strict and only give one warning. If there are blatant copyright violations after the first warning, the user should be indefinitely partially blocked from uploading files. They should be able to appeal immediately if they explain that they now understand the rules. This process is only valid for blatant copyright violations like wrong own-work claims, "source: internet", or source links to obviously non-free content. This should not apply to content where the uploader assumed it would be public domain (even if it is not), or to derivative work and FOP problems. GPSLeo (talk) 12:36, 26 October 2025 (UTC)
- simply Strong oppose. modern_primat ඞඞඞ ----TALK 19:43, 26 October 2025 (UTC)
- then put more admins here to cope with the workload. modern_primat ඞඞඞ ----TALK 19:44, 26 October 2025 (UTC)
- + if people get banned then sock puppetry will thrive. the current system of blocking for copyvio is good. modern_primat ඞඞඞ ----TALK 17:45, 27 October 2025 (UTC)
- I think the people who upload copyright violations in that way are not the same people who create sockpuppets to continue with their project disruption. The users who are to be blocked from uploading with this rule do not know what Wikimedia is. GPSLeo (talk) 19:16, 27 October 2025 (UTC)
- we'll see. may it turn out for the best. modern_primat ඞඞඞ ----TALK 05:43, 28 October 2025 (UTC)
Strong support, thanks to GPSLeo for pointing this out.--Kadı Message 23:22, 26 October 2025 (UTC)
Support, having reported overlooked mass copyvios in the past where the uploader turns out to have a talk page of warnings and earlier short blocks that they've waited out, not noticed, or not understood. The "time likely needed for the user to familiarize themselves with relevant policies and adjust their behaviour" of an initial week-long block makes sense in nuanced situations where a user might be applying a policy wrongly or interacting inappropriately while being an otherwise productive Commons user, but it doesn't seem applicable to someone who is claiming "own work" or "public domain" on clearly copyrighted content and making no constructive contributions. Belbury (talk) 10:09, 27 October 2025 (UTC)
Support - in my opinion, copyright violations are, at large, one of the most prominent dangers (if not outright number 1) to the integrity of the project. Strong signals like blocks (and knowing + applying en:WP:UBCHEAP for people appealing them) are sensible tools to deal with such uploads. Regards, Grand-Duc (talk) 21:15, 27 October 2025 (UTC)
Strong support. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 12:06, 28 October 2025 (UTC)
Comment Is it possible to partially block a user to prevent them from uploading new files, while still allowing them to edit file descriptions? If so, that seems like it could be a really useful tool to get users to fix licensing information on their uploads before letting them upload more files. Omphalographer (talk) 22:15, 28 October 2025 (UTC)
- This is exactly what I propose: only block them from uploading. This is possible; they can still edit in the file namespace, but they cannot upload new files. GPSLeo (talk) 22:54, 28 October 2025 (UTC)
- is there a reason why this isn't the norm already? Trade (talk) 01:29, 3 November 2025 (UTC)
- I started out wanting to weak oppose this, since my own start to this community was uploading copyvios. But when I got a "stop or you'll be blocked"-type message, I actually started to respond on my talk page in order to get a better understanding. When I began writing to oppose a stricter blocking policy, it was because I wouldn’t want to block the next me.
- I think every copyvio warning should automatically count as a clear "stop" notice – something unambiguous that the user can't miss amongst all the other boxes that must have already been left from previous deletion warnings (since that is why they are now getting a "stop" warning). But after writing this, I realized that what I was going to propose is almost the same as what's already being proposed here, so Support.
- However, I'd also like to propose a wider overhaul of how we notify users about copyright violations. Compare it to how vandalism warnings work on enwiki: short, conversational messages that invite the user to reply and ask questions, rather than long templates stacked with dense policy text. It's important information, and everything is super-important to know, follow, repeat in their sleep... (/joking aside...) The current system, with multiple boxes repeating similar warnings, doesn't really encourage dialogue or learning. A simplified, human approach could help new users understand why their upload was a problem and how to do better, rather than just overwhelming them with boxes upon boxes, a "stop" box in the middle followed by more boxes. --Jonatan Svensson Glad (talk) 23:07, 28 October 2025 (UTC)
- Supporting it in principle or in spirit, but only for actually severe cases, like people only or nearly only uploading copyright violations and continuing after the block expired. However, I think this can easily backfire and be misinterpreted if not codified well, such as by blocking users who actually uploaded a copyright violation once in a while, got warned, and then accidentally did it again, e.g. because they forgot months later or didn't notice. So I think the phrasing should be well thought through, quite specific, and well written, where "for blatant copyright violations like wrong own work claims, "source: internet" or source links to obviously non free content" is not yet sufficient.
- Prototyperspective (talk) 09:22, 29 October 2025 (UTC)
- +1. Some copyright rules are also complicated or simply not known by new users (many users don't work with copyright or law in their free time). Some just want to contribute some pics in good faith, so we should draw the line between people who don't want to care about the rules and people who are eager to do better and are or become useful contributors and want to learn about the issues to avoid them in the future. When I began, I also had some violations because knowing all rules before beginning is almost not possible, I guess. But for obvious cases, I can follow the proposal --PantheraLeo1359531 😺 (talk) 08:47, 15 December 2025 (UTC)
- If the sources to non-free content are so obvious, then why can't a filter pick them up? Trade (talk) 01:27, 3 November 2025 (UTC)
Support, the partial block should be such that it compels the user to respond to the talk page notice. Maybe we should broaden the partial block scope and include the File namespace as a whole, because if it is a vandal, their second stop (after not being able to upload) will always be files. I agree with @GPSLeo that continued upload of blatant copyvios after a first warning should attract a partial indefinite, not temporary, block. I also like @Josve05a's proposals regarding talk page notices. Thank you. Shaan SenguptaTalk 07:39, 3 November 2025 (UTC)
- I think any form of intentional dishonesty (e.g. editing an image to hide its source) should be an indefinite ban. Traumnovelle (talk) 06:02, 7 November 2025 (UTC)
- Agreed. But I'm much more skeptical of the value of blocking a new user over what may be an honest mistake or two. - Jmabel ! talk 23:48, 7 November 2025 (UTC)
- What do you mean by editing an image to hide its source? Trade (talk) 20:56, 9 November 2025 (UTC)
- A typical example would be taking a stock photo, deliberately editing out a watermark, and claiming it as own work. Omphalographer (talk) 21:54, 9 November 2025 (UTC)
- So if someone did the exact same thing but didn't bother to remove the watermark that would not count as intentional dishonesty in your opinion? Trade (talk) 23:35, 9 November 2025 (UTC)
- What if the user name / user page / contribution history contains what looks like a copyright holder's name, even if the file is available on the internet? Would sysops be responsible for checking those indications before immediately indef-blocking? Should it be "blocked until proven legitimate"? I think this "blatant" can be hard to define. A "Disney rep" uploading a recent Disney character might be hard to believe, while a lesser known company might be more willing to do a CC release for visibility. whym (talk) 06:22, 9 November 2025 (UTC)
- Rather Oppose. I think this addresses the issue the wrong way. The problem is not supposedly lenient admins who let people upload copyright violations. The issue is the lack of people checking real serious copyright violations. Quite a number of people spend a lot of time looking for potential copyright violations in borderline cases, including URAA. But very few people take care of the serious cases (files copied from social media and random websites). In consequence, these copyright violations are not checked, and the violators can continue uploading them. If the priority is changed to serious violations, violators would be warned early, and blocked soon after. Yann (talk) 21:14, 9 November 2025 (UTC)
- The two issues are connected, though. If we're giving social media copyright violators four chances instead of one, the few users who are patrolling for that kind of copyright violation will have to find and verify and report them four times. That spreads their efforts even more thinly.
- There's also (as I understand it) no mechanism for anyone to be alerted that a user who's come out of a temporary block for blatant copyvios has started uploading files again, so it is just going to be luck whether anyone notices a copyvio-only account resuming activity, each time. Belbury (talk) 12:34, 20 November 2025 (UTC)
- For information, my practice is the following: fewer than 5 copyright violations -> simple warning; more than 5 copyright violations -> last warning. Then a one-week block if still uploading after the last warning, and a 3-month block if still uploading after the one-week block. But I often discover users with 20+ copyright violations and no warning. So what do you suggest in such cases? Yann (talk) 16:26, 20 November 2025 (UTC)
- Oh, that's a point, I wasn't counting warnings among the "chances" above. So as things are, we're often giving blatant copyvio uploaders five or six opportunities to upload problematic content, including before the first and final warnings, and I think we should (as I think GPSLeo is suggesting) only ever give them two: one warning (at whatever level), then an indef block if they continue to upload copyvios. Belbury (talk) 16:49, 20 November 2025 (UTC)
- Is anyone stopping you from only giving one warning? Trade (talk) 05:58, 21 November 2025 (UTC)
- I'll sometimes start with a final warning if the user has multiple batches of copyvio deletion notices on their talk page, at least weeks apart, but no first warning. Belbury (talk) 10:18, 21 November 2025 (UTC)
- So why do we need a proposal? Trade (talk) 22:59, 25 November 2025 (UTC)
- The proposal is about what level of block is appropriate after the user ignores that final warning. Belbury (talk) 09:34, 28 November 2025 (UTC)
Weak oppose. I support a block after one warning, but I don't think the first block should be indefinite unless the uploader seems to be a vandal or spambot. On Wikivoyage, we normally use a system of escalating blocks, starting with 3 days, then 2 weeks, then 3 months, and then indefinite. However, I feel like admins are really overworked on Commons and struggling to catch up to problems, so I think the first block being a week and the second block being indefinite would be fine. -- Ikan Kekek (talk) 23:18, 16 November 2025 (UTC)
Support Reading the headline, my initial response was to oppose this idea outright. But GPSLeo makes a quite nuanced proposal that would only affect blatant copyright violators who continue their practice after the first warning; would not block these people entirely from the project; and would allow for an appeal procedure. If the guardrails outlined in the proposal are followed, this would hopefully lead to fewer copyvio uploads in the long run, because notorious copyright violators can either be educated to do better or be stopped entirely in their tracks. --Enyavar (talk) 00:37, 28 November 2025 (UTC)
Oppose I appreciate what User:Jonatan Svensson Glad wrote, and my thinking goes in that direction, even though my "vote" is different. Jerimee (talk) 22:56, 28 November 2025 (UTC)
Support --ReneeWrites (talk) 23:14, 13 December 2025 (UTC)
Support --Ooligan (talk) 02:15, 14 December 2025 (UTC)
Neutral I understand the proposal and support it in some ways, but I fear that the "threat" of sanctioning with a block may in some cases come too soon (afaik a block should not be considered a punishment, but rather a demand to comply with the rules). I saw one case where a long-time contributor had only very few violations and acted in good faith, as far as I could examine, and he got a warning of being banned. That was way too exaggerated in my opinion, so I would like to limit the stricter policy to (new) users who obviously cause vandalism and commit violations on purpose, and not apply stricter bans to users where problems are rare and where they are eager to do great. So, I am neutral on this --PantheraLeo1359531 😺 (talk) 08:41, 15 December 2025 (UTC)
Support Although this is de facto applied to problematic users. --Bedivere (talk) 04:00, 18 December 2025 (UTC)
Ratify Commons:AI images of identifiable people as a guideline
Following the discussion at Commons:Village_pump/Proposals/Archive/2025/09#Ban_AI_generated_or_edited_images_of_real_people, I prepared Commons:AI images of identifiable people.
I am now seeking to have it officially adopted as a guideline.
@GPSLeo, Josve05a, JayCubby, Dronebogus, Jmabel, Grand-Duc, Pi.1415926535, Túrelio, Raymond, Isderion, Smial, Adamant1, Infrogmation, Omphalographer, Bedivere, Masry1973, and Ooligan: I believe this is everyone that participated in the original discussion. Please feel free to ping anyone if I missed them.
Cheers, The Squirrel Conspiracy (talk) 22:28, 7 December 2025 (UTC)
Support As proposer. The Squirrel Conspiracy (talk) 22:28, 7 December 2025 (UTC)
Support As a 'canvas'-by-ping user. JayCubby (talk) 22:38, 7 December 2025 (UTC)
Support Pi.1415926535 (talk) 22:47, 7 December 2025 (UTC)
Support. Omphalographer (talk) 23:15, 7 December 2025 (UTC)
Support Abzeronow (talk) 23:29, 7 December 2025 (UTC)
Support Ooligan (talk) 00:01, 8 December 2025 (UTC)
Support Grand-Duc (talk) 00:54, 8 December 2025 (UTC)
Support --Bedivere (talk) 01:02, 8 December 2025 (UTC)
Support with one caveat "The image in question was published by the person it depicts" should be "The image in question was published by the person it depicts or with their documented permission or approval." - Jmabel ! talk 02:53, 8 December 2025 (UTC)
- @The Squirrel Conspiracy: would you have any objection to that small edit? - Jmabel ! talk 05:13, 8 December 2025 (UTC)
- In principle, I think it's fine. In practice, I'm not sure what exactly that looks like, though. I'm loath to have people submit "documented permission" to VRT, because a) they're often backlogged as it is, and b) there's this loop where someone uploads a file, then it gets deleted for permissions reasons, then it goes through VRT and is restored, then it gets deleted for scope reasons (because while VRT agents can decline tickets for scope reasons, it seems like a decent number of agents are uncomfortable doing so) – it's a tremendous waste of volunteer resources, and I can see a lot of AI images getting stuck in that loop. @Krd, thoughts? The Squirrel Conspiracy (talk) 05:53, 8 December 2025 (UTC)
- I think the sort of situation Jmabel is trying to address is content published on behalf of a person by a social media manager or similar. For instance, if a political figure were to post an AI-generated image on their social media, we wouldn't necessarily know whether it was personally posted by the politician or by their PR team, but it should probably be considered allowed regardless. Omphalographer (talk) 06:01, 8 December 2025 (UTC)
- @Jmabel: could the word "auspice" or a wording like "published by or on behalf of the person it depicts" work (with a footnote explaining that "on behalf of" shall mean by a person like a social media manager)? Regards, Grand-Duc (talk) 06:41, 8 December 2025 (UTC)
- That's one of two cases I had in mind. The other is after-the-fact endorsement. E.g. (this has happened) someone publishes an AI-generated image of Trump, Trump re-tweets it (or whatever you call the equivalent on Truth Social). Also (likely, but no examples offhand), someone approvingly links in social media or on their own web page, etc. to an AI-generated image of themself.
- FWIW I wasn't thinking VRT at all. I'd hope that seldom, if ever, arises. - Jmabel ! talk 22:26, 8 December 2025 (UTC)
- Also a good point. There's a lot of different ways that content can be posted on social media these days - posting, reposting, embedding offsite media, etc. IMO, we should treat all of these cases identically for the purposes of this guideline. Omphalographer (talk) 00:27, 9 December 2025 (UTC)
Neutral While I am against this as a policy/guideline, the community has spoken. So, nothing against ratifying it, but I don't want to support it. --Jonatan Svensson Glad (talk) 03:51, 8 December 2025 (UTC)
- Well, the wording suggests it would be a policy anyway, disallowing some AI materials de facto (or de jure, depending on how you interpret it). Bedivere (talk) 04:48, 8 December 2025 (UTC)
- The community has previously spoken on another proposal, not this proposal. Now, the community is hopefully speaking about this new proposal which is different from the earlier one. Prototyperspective (talk) 17:12, 8 December 2025 (UTC)
- it's somewhat insulting to imply that participants are confused or unaware; they've simply reached conclusions different from yours. Bedivere (talk) 22:08, 8 December 2025 (UTC)
- Good that I didn't imply that then. Prototyperspective (talk) 10:19, 9 December 2025 (UTC)
Support Raymond (talk) 07:44, 8 December 2025 (UTC)
Support. --Túrelio (talk) 07:58, 8 December 2025 (UTC)
Support, looks good. --Belbury (talk) 08:45, 8 December 2025 (UTC)
Support GPSLeo (talk) 09:29, 8 December 2025 (UTC)
Support --Smial (talk) 11:41, 8 December 2025 (UTC)
Strong oppose The original proposal had "AI generated photos where the description states that the photo shows an actual person are not allowed", but this new proposal now has the much more restrictive "Images of identifiable people created by AI are not allowed on Commons unless at least one of the following criteria are met" [posted by the person, or reliable sources cover it]. I don't know if the voters here all know about this. I think it should be changed. There are two main issues:
- Example File:King Tutankhamun brought to life using AI.gif (display was disabled)
- Information graphics and art such as caricatures relating to public officials such as an information graphic or artwork pointing out problems of Trump behavior, claims & policies.
- It doesn't seem to exclude identifiable historic people. AI images can often make sense, especially when there is nearly no or no free media available of the person. An example is on the right.
- I think the votes were cast hastily, without proper deliberation and without consideration of potential uses. A policy this indiscriminate and restrictive additionally seems to violate the existing policies COM:SCOPE, COM:INUSE and COM:NOTCENSORED. A constructive approach would be to edit the proposed policy, but I would probably still tend toward oppose because I see no need for this – we should strive to stay as unbiased and uncensored as possible and delete files based on whether deletion is warranted per set/case. People could introduce more and more restrictions, and soon you'll find yourself in a situation where you can't even upload an image critical of Trump anymore per policy (and with wider adoption of AI tools by society, this is what this policy will already achieve to a large extent).
- Prototyperspective (talk) 15:29, 8 December 2025 (UTC)
- It's bold of you to assume that everyone above you voted "hastily without proper deliberation and without consideration of potential uses". More likely, I think, is that the other participants simply disagree with you.
- Regarding the first point: "The image in question is the subject of non-trivial coverage by reliable sources" already covers the use case of "caricatures relating to public officials". The series of images that File:Trump’s arrest (2).jpg belongs to, for example, are permissible under this guideline. This guideline would not permit a random user's AI image caricature of Trump, but even without this guideline, it would be deleted as personal art.
- Regarding the second point, "It doesn't seem to exclude identifiable historic people.": that is working as designed. If it's a notable depiction, it'll be covered by "non-trivial coverage by reliable sources". If it's a random user's AI image of a historic figure, even without this guideline, it would be deleted as personal art. Keep in mind that the image you posted does not depict King Tut. It depicts what a probability engine thinks the prompter is looking for – a young boy with Arabic features in pharaoh attire. It has no way of knowing if any of what it did is accurate. This is why some projects have already banned most AI images.
- The Squirrel Conspiracy (talk) 16:45, 8 December 2025 (UTC)
- sincerely, that "Tutankhamun" image is disgusting AI slop. I can see why it is necessary to have all these (non-notable) depictions banned. If someone wants to share their (prompted) art, there are venues such as Tumblr, DeviantArt and Twitter (or whatever Elon Musk has decided to call it). Bedivere (talk) 16:52, 8 December 2025 (UTC)
- Nothing about it is disgusting. "why it is necessary to have these all (non notable) depictions" – ok: so why? Prototyperspective (talk) 17:05, 8 December 2025 (UTC)
- They are fictional reconstructions produced by a model, not representations of an actual person, making them potentially misleading and outside COM:SCOPE. Allowing non-notable AI depictions would open the door to massive amounts of invented imagery serving no educational purpose. Notable cases are covered by the exception. Bedivere (talk) 22:07, 8 December 2025 (UTC)
- So if a public broadcast documentary shows some well-known historical figure, does that mean the segment is non-educational and the documentary so badly disgusting, because they're showing a historical person differently than s/he may have looked? Prototyperspective (talk) 22:40, 8 December 2025 (UTC)
- in that case, the key would be that the recreation would most likely be a human creation or representation, not something created by an algorithm. Bedivere (talk) 00:57, 9 December 2025 (UTC)
- "to assume that…": I didn't do so, if you read my comment. This is a false statement.
- "already covers the use case of "caricatures relating to public officials"": No, it doesn't. It means caricatures and critical works are reserved for the privileged few who got reported on in major publications. What chaos if we'd allow common citizens to release critical art and information graphics, right?
- "it would be deleted as personal art.": No, it wouldn't (necessarily). It depends on how educational/useful it is.
- "a young boy with Arabic features in pharaoh attire": Exactly, and such things can be useful and interesting, especially if engineered to closely match data about the given person.
- "no way of knowing if any of what it did is accurate": Not the AI but the prompter. Prototyperspective (talk) 17:11, 8 December 2025 (UTC)
Support, with the addendum that publications on behalf of someone should also be permitted. --Carnildo (talk) 23:15, 8 December 2025 (UTC)
Support Infrogmation of New Orleans (talk) 01:21, 9 December 2025 (UTC)
Support the proposal, and also Support whacking User:Prototyperspective with a wet trout. Apocheir (talk) 04:08, 9 December 2025 (UTC)
- Re trout: if I made an error, point out which one by addressing it (ideally refuting it). Why do educational documentaries use fictional depictions of historical people if such depictions can't be educationally useful? These are banned by this proposal as well. I always support truly considering and addressing points raised in every kind of community decision-making, especially when it's volunteers.
- Another point I didn't mention earlier: the policy rationalizes itself with "When dealing with photographs of people, we are required to consider the legal and moral rights of the subject […] Commons has long held that files that pose such legal or moral concerns", but why would this not apply to paintings or non-AI digital art of identifiable people? And does this really apply to neutral depictions of ancient historical people? There is no need for this policy considering the very low number of such files Commons currently has.
- Prototyperspective (talk) 10:24, 9 December 2025 (UTC)
- Personal art about notable people was always not allowed, as being out of scope. That it was only handled through the regular scope rules was never a problem because of the small number of such uploads. Now, with the AI tools available, there are many more such uploads. To avoid long discussions and case-by-case decisions, we need this new stricter guideline. GPSLeo (talk) 11:28, 12 December 2025 (UTC)
Personal art about notable people was always not allowed as being out of scope
False. Personal art by non-contributors is speedily deleted, so this is an additional reason why there is no need for this proposed policy. Other than that, I don't know of such a policy, especially not one that clarifies what is meant by "Personal art".
Now with the AI tools available there are much more of such uploads.
Arguably false. There aren't many – currently just 99 in the cat. That's maybe the number of files uploaded to Commons every two minutes.
- Moreover, a significant fraction of them are COM:INUSE, underlining that these files can also be useful on Wikimedia projects, even though the ones we have are not close to what is possible with these tools in terms of quality (and accuracy, if data on appearance is available). But Commons isn't there only for wikiprojects but also for e.g. documentary makers, who often show fictional imagery of historical people (as stated earlier, and which I could prove by linking to several such documentaries with example timestamps).
To avoid long discussions and case by case decisions, we need this new stricter guideline
For personal art by non-contributors and hoaxes, files can already be speedily deleted without discussion. For files that are low-quality or not useful, there generally are no lengthy discussions. Enabling users to discuss whether a file should be deleted is a point of COM:NOTCENSORED, which this proposed policy would, as far as I can see, invalidate given its current title/proposition. There are a lot of things where one may prefer not to enable discussion. I still see no need for a stricter guideline.
- Prototyperspective (talk) 11:41, 12 December 2025 (UTC)
Oppose The page refers to "legal and moral" rights as a justification but doesn't cover cases where the legal and moral rights are expired. If there's another good reason to exclude pictures of, say Cleopatra or Genghis Khan, the policy needs to spell it out. -Nard (Hablemonos) (Let's talk) 17:27, 11 December 2025 (UTC)
- Editorial standards are moral rights too. We seldom make editorial decisions for other wikis on Commons, but here it is needed to protect our project. Having AI generated images of historical personalities, used to show how this person looked like, is against good journalistic standards. We still allow such images if created in the context of a relevant art project or scientific paper. But we do not want that every user can just upload such content. GPSLeo (talk) 11:37, 12 December 2025 (UTC)
used to show how this person looked like
This is not the only use-case of such imagery. An example I gave is a documentary film video about, say, Ancient Egypt, and I noted I could provide evidence that such documentaries usually do include fictional imagery of historical people.
is against good journalistic standards
Commons is not censored based on proposed "journalistic standards". Prototyperspective (talk) 16:12, 15 December 2025 (UTC)
- I think the point is that living people have certain rights that dead people cannot have, and this proposal's main justification lies there. Editorial standards seem to be secondary to the proposal. whym (talk) 23:41, 5 January 2026 (UTC)
- Editorial standards are not moral rights; they're standards used by a certain organization. I see no evidence that journalistic standards exclude the use of tools to show how someone might have looked. Wikipedia certainly uses much worse, random images produced by people who had no idea how the person may have looked, but by paint and not computers.--Prosfilaes (talk) 03:24, 8 January 2026 (UTC)
- FWIW, those have a certain value in terms of showing how someone was perceived in a given era. For example, all images of biblical figures are from people who had never seen them (unless we count visionaries as actual witnesses). A painting of Jesus by a notable artist has an historical significance that an AI image of Jesus does not, though it would be purely coincidental for either to be a good likeness. - Jmabel ! talk 03:47, 8 January 2026 (UTC)
Support --ReneeWrites (talk) 23:11, 13 December 2025 (UTC)
Strong oppose No reason provided why this is needed when Commons:Scope already exists. --Trade (talk) 15:59, 15 December 2025 (UTC)
- @Trade I assume you mean
Support, otherwise the context is not clear for us :) --PantheraLeo1359531 😺 (talk) 16:03, 15 December 2025 (UTC)
- It might be a reaction to my deletion decision in Commons:Deletion requests/File:GPT-4o Studio Ghibli portrait of Barack Obama.png. Abzeronow (talk) 02:00, 16 December 2025 (UTC)
- "we should not have it because i dont want it" is not a very compelling argument Trade (talk) 16:35, 16 December 2025 (UTC)
- I didn't feel like posting a whole treatise for a DR close on how that AI portrait would likely violate the principles of en:WP:BLP and Obama's moral rights as well as the fact that an AI portrait is not an accurate representation of a person, and there is no educational reason why we'd need a Ghibli-style (which essentially violates the copyrights of Studio Ghibli btw) portrait of Obama when we have plenty of portraits of Obama that are educationally useful. Abzeronow (talk) 00:09, 17 December 2025 (UTC)
Oppose for its treatment of dead, especially long-dead, people. AI of living people is problematic. AI pictures of King Tut are not. That rule goes much too far in telling the other projects that depend on us what they may use as illustrations.--Prosfilaes (talk) 07:13, 17 December 2025 (UTC)
- @Prosfilaes: what would you think of a rule about some number of years after death? - Jmabel ! talk 19:17, 17 December 2025 (UTC)
- I personally am not interested in diluting the policy for one person's objection when 18 people have already approved it as is. The Squirrel Conspiracy (talk) 23:52, 17 December 2025 (UTC)
- It is not one person. Moreover, things aren't just about the relative number of votes but also about the content of what people have written. Wikipedia for example has a policy about that, en:WP:NODEMOCRACY.
No reason has been given so far for why Commons should censor/disallow/entirely-delete images of the mentioned type in apparent tension and/or contradiction with other policies – namely at least COM:SCOPE and COM:NOTCENSORED – and with so far unclear need for it (implied also by there being no stated reason). Prototyperspective (talk) 00:06, 18 December 2025 (UTC)
- Sure. Life+50 or life+70 are nice round numbers, and we should generally be able to find photographic evidence of anyone within that range. There are other people who have made similar objections, and such objections don't lead to good consensus.--Prosfilaes (talk) 02:02, 18 December 2025 (UTC)
Support with Jmabel's caveat. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 01:54, 18 December 2025 (UTC)
Strong support. I don't think we should be hosting deepfakes of any kind, to prevent the spread of misinformation, respect towards the person being depicted among many other ethical and social considerations. It's moon (talk) 03:35, 27 December 2025 (UTC) – Edited on 07:21, 29 December 2025 (UTC)
Support Surely one benefit of this guideline is that it will deter those who attempt to get around copyright rules by using AI-generated portraits. However, considering that the Commons version may differ from or even conflict with those of other communities, images that do not comply with this guideline should also be excluded from COM:INUSE rules. 0x0a (talk) 17:51, 28 December 2025 (UTC)
- @0x0a: that last (about this trumping INUSE) sounds like you are making a different proposal than the one about which everyone above has expressed their opinion. - Jmabel ! talk 01:46, 29 December 2025 (UTC)
- Um, I kinda believe INUSE also needs to be updated accordingly, so I opened a new discussion at
- 👉︎ Commons_talk:Project_scope#Proposed_change:_excluding_images_do_not_comply_with_COM:AIP_from_COM:INUSE_rules -- 0x0a (talk) 10:12, 29 December 2025 (UTC)
- @0x0a: I disagree. Part of the point of the guideline is to not use deepfakes and inaccurate representations of identifiable people, not just on Commons but across all Wikimedia projects. Therefore all images that don't meet the proposed guideline should, in my opinion, get deleted once the guideline gets ratified, regardless of whether they are currently in use on other projects or not (with perhaps the only exceptions being images that get used to illustrate the concept of deepfake or similar itself → and even in those cases, they should probably still have been published by the person they depict). It's moon (talk) 10:58, 29 December 2025 (UTC) – Edited on 12:13, 29 December 2025 (UTC)
- Frankly, I don't know which of my statements you disagree with. I clearly support this proposal and have already opened a revision discussion at Commons_talk:Project_scope regarding the conflicting part with the guideline. 0x0a (talk) 14:50, 29 December 2025 (UTC)
- Whoops, I misread INUSE, I thought you were saying that images used on other projects should be kept, which I disagreed on, but I am realizing you were saying they should get deleted, so turns out we both agree. It's moon (talk) 16:05, 29 December 2025 (UTC)
- I think the oppose votes, even if they are the minority, raise valid points about living people and long-dead people. I’d suggest focusing on living people (and perhaps the recently deceased) for now. This is not to say anything goes for images of the dead; it would just be left undetermined in the meantime. I think that a narrower focus would allow us to ratify some important and non-controversial part of the proposal quickly with broader support. We can continue working on the rest and additively revise the policy after that. whym (talk) 11:38, 5 January 2026 (UTC)
Oppose This seems overthought. Take the bit that's important, tweak it, and add it to COM:PIP. AI images of identifiable people are not allowed on Commons unless they have been published with the subject's permission or the image itself is the subject of significant public commentary in reputable sources.
There's no need to rehash a moral framework, define what a person is, or legislate interactions with overarching standards like SCOPE or DW. There's no need to add technical issues related to things like upscaling. Wherever that needs to go, it's not specific to identifiable people. There's no need to try to define a boundary between substantially AI-edited and AI-generated. No need to get into what counts as a good source. The operative bit above sets the standard and people can sort out the finer details in vivo. GMGtalk 14:18, 5 January 2026 (UTC)
Comment Regarding AI images of long-dead people, while not necessarily problematic when it comes to legal and moral rights of the subjects, there are other factors that make these images unsuitable for an educational project like Commons. The example of Tutankhamun illustrates this perfectly. We have multiple forensic studies that reconstruct Tutankhamun's appearance based on the actual structure of his skull and mummy (see [1], [2], [3], [4], [5], [6]). However, files such as File:King Tutankhamun brought to life using AI.gif are problematic because they are historically inaccurate, overly idealized misrepresentations. This just further shows how generative AI can and will make false assumptions about historical subjects and introduce misinformation. It's moon (talk) 14:50, 5 January 2026 (UTC)
- What if a Wikibooks chapter wants to discuss misinformation using AI-generated Tutankhamun images as illustrations? whym (talk) 23:38, 5 January 2026 (UTC)
- I had seen that study before my post with that gif earlier FYI and I'm well aware of scientific facial reconstruction.
- First of all, you're making the false assumption that the educational function of media showing ancient people is primarily, or even only, to educate people on how precisely the given people looked. That is not necessarily the case, probably not even usually. If I wanted to make an educational podcast video about King Tutankhamun talking about historical facts and the peculiarity of his young age, it would be more interesting if it had some visuals. Such an animation, even if not accurate down to the tiniest detail, would help the listener visualize and better imagine what is being talked about, plus it makes them take in more information, as the content is not dull and boring but exciting. An example here is the Fall of Civilizations podcast that I sometimes enjoy listening to. It also has some visuals on YouTube – do you think it's accurate to the last detail? Example: Ep 18, Fall of the Pharaohs (1.1 M views), such as its depiction of Ramesses. (Btw, I made some educational podcasts in the past and went to Commons to find free media to use, which was often so gappy that I first had to upload relevant media here from elsewhere, and I have seen how AI media can sometimes be useful for podcast- and documentary-making, depending on various factors such as how it's contextualized, etc.)
- It depends on how the file is used. If it's used in a Wikipedia article where the text implies or the caption says basically 'This is how Tutankhamun exactly looked like' then it's problematic. But the problem there is how it's used, not that it's on Commons.
- The gif actually looks quite similar to the scientific reconstruction. Maybe you think it's of utmost importance that even the tiniest of facial details is exactly accurate in any depiction and everything else is "misinformation". But that's not what matters to many people or in many contexts, such as when the media is not contextualized as a very realistic restoration and the subject is just e.g. the young age of Tutankhamun. Moreover, most paintings, especially historic and ancient ones, are very inaccurate.
- The question is not whether there are studies that reconstruct a given person's face – and for most notable long-dead people there aren't any – but whether the media is on Commons / free-licensed. There's basically one person (big thanks to him) who creates (static) restorations of notable people – ~150 files in Category:Works by Cícero Moraes – and sometimes (probably fewer than these) some free-licensed image in some study or elsewhere to import. For many notable subjects there aren't media. Key here is that just because a file is on Commons, doesn't mean it has or needs to be used. Lastly, AI tools here can be leveraged to create scientifically accurate free-licensed depictions of people: one can prompt with descriptions of the scientific reconstruction and additionally select and adjust the results until one has a result where the appearance matches that of the scientific reconstruction.
- Prototyperspective (talk) 00:22, 6 January 2026 (UTC)
- @Prototyperspective: I think that most regulars understand the proposed policy not as a tool for an absolute prohibition of AI generated depictions of (long-dead) persons, but rather as a quality-assurance tool to stem any influx of such imagery without clear-cut use case. I as a supporter certainly do.
- I see the current situation as "Upload first, ask later", and without robust tools to have an editorial overview of AI generated imagery. It's kind of similar to "shall issue" states in the US in regard to firearm laws and concealed carry. I think that most supporters are advocating for the alternative of "Ask yourself first if AI is useful, then if yes, upload", the default being "Don't upload" (or delete by due process if uploaded anyway). Such a mindset in regard to AI slop and AI generated imagery in general would be a robust tool for the needed curating. To return to the concealed carry example: we should switch from a "shall issue" to a "may issue" style of permit. This implies that, of course, an AI generated Tutankhamun image with a demonstrated solid use case (like the Wikibooks thing above your post) can and may stay. I'm advocating that such AI imagery imperatively needs a worked-out context in its description (prompt, use case, ideally the sources) besides the demonstrated need of actual use somewhere; otherwise it's liable to get deleted.
- Lastly, you wrote
AI tools here can be leveraged to create scientifically accurate free-licensed depictions of people: one can prompt with descriptions of the scientific reconstruction and additionally select and adjust the results until one has a result where the appearance matches that of the scientific reconstruction.
As it stands now, the tools available to the general public (ChatGPT, DALL-E, Stable Diffusion...) are built in a way to generate eye candy (as you wrote on the German forum; I could also refer to de:Klickibunti), not scientifically sound media, as that is what is likely expected by their users. Some software that is specifically made for scientific reproductions (like forensic face generation, digital aging or similar) won't be within the purview of this policy. Regards, Grand-Duc (talk) 18:22, 6 January 2026 (UTC)
- Reasonable point, but I disagree: there is no flood of AI imagery, and this proposed policy probably won't be much help with this nonproblem even if it were a problem. It's redundant given the policies COM:SCOPE and COM:DIGNITY, while in direct contradiction with COM:NOTCENSORED and, as explained above, COM:SCOPE; the minor potential benefits are not worth the inconsistency and problems that come with this proposed policy. People can already nominate any such files, or many at once, for deletion.
- The Tutankhamun animation has two educational use-cases I can readily think of – and we shouldn't assume we can, or need to be able to, readily think of all potential use-cases:
1. as part of some video or page about Tutankhamun where the animation is not contextualized as being precise to the last facial wrinkle but just a rough AI visualization, e.g. showing his young age;
2. as an illustration of how AI tools can be used to visualize people, such as ancient people, in moving (non-static) format (even if some say the quality is low).
are built in a way to generate eye candy
I know they are not built to make what I described easy. That doesn't mean they can't be used for it. People could, for example, learn about this use-case and the current issues with it and adjust these tools, or use them in sophisticated ways, to create better-quality results of that type.
Some software that is specifically made for scientific reproductions
I'm not talking about other software, though. The current models can already be used for this; it's just not easy. Many people think using AI tools is always easy, but it isn't – the way most people use them may be simple, but some people use them in more sophisticated ways that need a lot of skill and expertise. I outlined roughly how these tools, including just standard Stable Diffusion etc., can be used for reproductions of scientific accuracy, and you seem to have overread or ignored that. This can already be done; I'm just not skilled enough with these tools, and also not motivated enough to spend my time and effort on it to prove it to you right now. My prior low-effort uploads relating to this are more about (enabling) communicating the concept and idea – this again can lead to people fleshing out this application for higher-quality results via adjusting or building tools and developing workflows. But again, not for every application does each facial detail matter, such as for the podcast linked above, where at least one ancient person is also depicted without scientific-precision-level accuracy (btw, typo: it has 11 M views, not 1.1 M). Prototyperspective (talk) 19:19, 6 January 2026 (UTC)
- You repeatedly claim that editors overread or fail to deliberate whenever they disagree with your views ([1], [2], [3]).
- My stance is that we need to build policies based on how AI is currently being used, not how it could or may theoretically be used. I'm not against changing the policy later down the line if we see a change in AI accuracy or a tendency to a more responsible usage, but for now we have to address the current reality. It's moon (talk) 21:20, 6 January 2026 (UTC)
- Your claims are ad hominem argumentation, and I will not stand for them. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 21:42, 6 January 2026 (UTC)
- @Jeff G.: Could you clarify on who you are replying to? It's moon (talk) 22:00, 6 January 2026 (UTC)
- @It's moon: I was replying to Prototyperspective, referencing your characterization of their claims. Sorry for not specifying that, I thought my indentation was clear. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 22:49, 6 January 2026 (UTC)
- Understood, thanks. It's moon (talk) 23:05, 6 January 2026 (UTC)
- Absurd claim; if you ignore all I said in my comment imo it's better to not comment at all. Prototyperspective (talk) 22:54, 6 January 2026 (UTC)
- @Prototyperspective: Better for you, maybe. I didn't ignore it, I agreed with @It's moon's characterization of it. I asked you nicely in this edit 16:09, 7 November 2024 (UTC) to stop with the insults and displaying your pro-AI bias. Now, I am warning you: if you do it again, I am going to report you. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 23:21, 6 January 2026 (UTC)
- I'm not insulting anybody and didn't make any ad hominem, and I am nicely asking you to please not accuse me of things I'm not doing, thanks. Prototyperspective (talk) 23:29, 6 January 2026 (UTC)
- @Prototyperspective Did you or did you not write "you ignore all I said in my comment" 22:54, 6 January 2026 (UTC)? — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 23:36, 6 January 2026 (UTC)
- This is not an insult. It was a rational point that your comment did not address or relate to anything I wrote (where, btw, imo a constructive rational response would be to prove me wrong by pointing to the specific text segment to which your comment does relate, if there was any – but there isn't any "ad hominem" in there, let alone is it all just that). With "ignore" I meant you didn't address any of it, which of course one can do, but I'm also free to point that out even if you disagree with that assessment. Prototyperspective (talk) 23:44, 6 January 2026 (UTC)
- Re.
there is no flood of AI imagery
- my experience speaks otherwise. I've seen a ton of clearly AI-generated images uploaded to Commons, including a substantial number of AI-generated or heavily AI-retouched images of people. Omphalographer (talk) 21:51, 6 January 2026 (UTC)- How is that a flood? People upload floods of mundane low-resolution photos of all sorts, repetitive high-size mundane photos, and so on – probably hundreds per day on average. There's just a few thousand AI files; 1089 in AI-generated humans – that's near-nothing in Commons. And the depictions of historic/ancient people is an order of magnitude below that. Prototyperspective (talk) 23:00, 6 January 2026 (UTC)
- The vast majority of new AI-generated uploads are deleted, most often under CSD F10. The files which end up categorized - and particularly those which are placed in those "AI-generated by subject" categories - are a small fraction of what's coming in. Omphalographer (talk) 23:22, 6 January 2026 (UTC)
- Good point, but it's not a small fraction in my experience (from regularly tracking all new AI uploads for over a year and categorizing probably more than half of AI-related files) – maybe around as many as are still on Commons.
- If one makes a comparatively large effort to delete low-quality AI media, then it can seem as if it's a flood but there's days where not even one AI image got uploaded and people I think aren't doing a comparable effort to find and delete low-quality drawings and low-resolution-mundane photos. I think we just keep disagreeing on that point but it's not central to my arguments above – especially so since you also say these files are already even speedily-deleted so this new policy is not needed, especially not in this indiscriminate/harsh+unjustified shape. Prototyperspective (talk) 23:38, 6 January 2026 (UTC)
- Re.
there's days where not even one AI image got uploaded
- not recently! There are typically somewhere on the order of 50 to 100 AI-generated images uploaded every day. Omphalographer (talk) 21:28, 9 January 2026 (UTC)
- I think that most regulars understand the proposed policy not as a tool for an absolute prohibition of AI generated depictions of (long-dead) persons, but rather as a quality-assurance tool to stem any influx of such imagery without clear-cut use case. Er, what? No, we don't use policy that says these things are "not allowed" and then argue it's fine because it's not an absolute prohibition. Policy should say exactly what it means; laws saying that X is not allowed and people in the know getting the wink and nod from people also in the know is a good way to piss off users.--Prosfilaes (talk) 03:24, 8 January 2026 (UTC)
Support Without having read this whole discussion, I've looked at the proposed guideline as it stands today, and I agree with the proposal. It is quite restrictive, but I think we need to be restrictive handling such AI-generated images. We should always be extremely cautious and only allow a selection of such images where there is a very good reason for each individual image to host it at all. Gestumblindi (talk) 09:53, 6 January 2026 (UTC)
- One of the controversial points that emerged in the discussion is whether we are legally required to protect dead people's dignity in the same way as that of living people. What do you think? whym (talk) 10:36, 7 January 2026 (UTC)
- @Whym: Well, legally required? That's a question we could discuss in great detail, as it very much depends on the jurisdiction. Germany, for example, has quite strong postmortal personality rights at least for recently deceased people, while Switzerland doesn't have quite the same concept. I don't know how this is in the US; if we applied the same principles as for copyright, we could require an image (be it real or AI generated) to not infringe postmortal personality rights in the US and in its country of origin... But I think regarding AI generated images, that's a point we don't even need to discuss, as the moral and scope issues should be enough to refrain from hosting such images in most cases. Gestumblindi (talk) 18:49, 7 January 2026 (UTC)
- Yeah, it seems like there is a territory-specific component to be considered regarding the living vs dead issue.
- The current proposal's main justification, as it is written, seems to be the moral rights of the people depicted, though. (It's in the first paragraphs.) If there are other, more important rationales, I think the proposal needs to be revised to more clearly include them and argue based on them. Without such (major) revision, I think it would make a more solid argument if we stick with living people within this iteration. whym (talk) 01:20, 11 January 2026 (UTC)
Support Strakhov (talk) 18:27, 6 January 2026 (UTC)
Support Ternera (talk) 14:02, 7 January 2026 (UTC)
Linking "Categories" to Commons:Categories instead of Special:Categories
[edit]
Next to all files' categories, there is a link to Special:Categories. However, that page is probably confusing to new users and, as far as I can see, not really helpful or useful. The image on the right explains it.
The page does start with The following categories exist on Commons. * For an introduction, see Commons:Categories
but many people probably over-read it (it's short text at the very top and the wikilink list is what's catching the eye) or don't click a further link. People often aren't that interested to navigate around by clicking yet another link. Probably, the page-views of this page are higher or comparable to the Commons page and reads of the latter would increase significantly if it was directly linked.
Web search engines would probably also better show the help page if it was linked on all file pages instead of that special page. I don't know of any application where browsing all existing categories alphabetically would be useful – I'm sure there are some but think this page would better be linked from Commons:Categories and that page be linked in the panel, instead of the other way around.
What do you think – would it be good to replace this link in the categories panel of file pages to the policy & guideline page Special:MyLanguage/Commons:Categories?
Prototyperspective (talk) 16:04, 11 December 2025 (UTC)
Support Never noticed, but this seems logical. --Schlurcher (talk) 08:49, 12 December 2025 (UTC)
Support Does anyone know how to change this? Is this link built into core MediaWiki? GPSLeo (talk) 11:30, 12 December 2025 (UTC)
Support I don't know how we change it, either, but I imagine we can work that out once we have consensus to do so. - Jmabel ! talk 20:11, 12 December 2025 (UTC)
- It looks like what's to be changed is MediaWiki:Pagecategorieslink. Special:MyLanguage/Commons:Categories might be a better link. whym (talk) 01:28, 13 December 2025 (UTC)
- @Whym: agreed. @Schlurcher, GPSLeo, and Prototyperspective: do you also agree, so we can modify the proposal to accommodate language? - Jmabel ! talk 03:01, 13 December 2025 (UTC)
- Agree of course and edited the two wikilinks above. --Prototyperspective (talk) 11:31, 13 December 2025 (UTC)
- Agree. If we simply have to change MediaWiki:Pagecategorieslink that would almost be too easy. --Schlurcher (talk) 14:53, 13 December 2025 (UTC)
Support changing the link per discussion above, I agree this will be more helpful to new users. Thanks. Tvpuppy (talk) 16:22, 13 December 2025 (UTC)
Support -- Ooligan (talk) 01:46, 14 December 2025 (UTC)
Done No opposition after one week --Bedivere (talk) 03:51, 18 December 2025 (UTC)
- @Bedivere: The agreement was to link to Special:MyLanguage/Commons:Categories. Could you please update MediaWiki:Pagecategorieslink accordingly? Thanks. Schlurcher (talk) 14:25, 18 December 2025 (UTC)
- Done. Bedivere (talk) 04:07, 19 December 2025 (UTC)
Comment many of the translations are fairly incomplete or outdated. It would be nice if more users helped with these translations. Prototyperspective (talk) 16:39, 19 December 2025 (UTC)
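For reference, the change discussed above amounts to editing the system message so that its entire content is the new link target; this assumes MediaWiki's default behaviour, where MediaWiki:Pagecategorieslink holds the page title that the "Categories" label links to:

```
Special:MyLanguage/Commons:Categories
```

If I recall the message split correctly, the visible label text comes from the separate MediaWiki:Pagecategories message, which is unaffected by this change.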
Leveraging Reddit content with automatic release requests and a queue
[edit]
I’ve been thinking that it would be great to automate some of the outreach that we do via VRT—normally we cold-email copyright holders for material after someone has already identified it as worth uploading to Commons.
Reddit is full of subreddits where people post their own photos—of living things, artifacts, locations, food, etc. Lots of these photos are of a high quality and carry encyclopedic value. What if highly-voted original content on Reddit automatically received a comment explaining Wikimedia licensing and asking the author if they would be willing to release? They could agree right there in the comments. Then the post could be added to a queue/pool where volunteers could verify the license and evaluate the usefulness of the image before uploading to Commons.
I think this would really open a bottleneck in the relicensing of images across the web for Commons. The logistics need to be thought through, but I think it’s achievable through coordinating with subreddit mods to allow these comments (independent bots are a normal part of Reddit, and the API allows reasonable use for free). Zanahary (talk) 17:48, 14 December 2025 (UTC)
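The flow sketched above (comment on qualifying posts, detect an explicit release reply, queue for human review) could look roughly like the following. This is only an illustrative sketch: the subreddit name, score threshold, flair value, and release phrase are assumptions, not agreed details.

```python
# Sketch of the proposed outreach bot (illustrative; names, thresholds and
# the release phrase are assumptions, not agreed details).
# The live half would use PRAW (the Python Reddit API wrapper); the decision
# logic below is pure, so it can be reviewed without Reddit access.
import re
from dataclasses import dataclass
from typing import Optional

# The exact statement the bot would ask posters to reply with.
RELEASE_RE = re.compile(
    r"I,?\s+the creator of this work,?\s+license it under CC[- ]?BY[- ]?4\.0",
    re.IGNORECASE,
)

@dataclass
class Post:
    score: int
    flair: Optional[str]  # e.g. "OC" on subreddits that flag original content
    is_self: bool         # text-only posts carry no image to request

def qualifies(post: Post, min_score: int = 100) -> bool:
    """Only highly voted image posts flaired as original content get asked."""
    return (not post.is_self) and post.flair == "OC" and post.score >= min_score

def is_release(comment_body: str) -> bool:
    """Did a reply contain the explicit release statement the bot asked for?"""
    return RELEASE_RE.search(comment_body) is not None

# With PRAW, the live loop would look roughly like (untested pseudocode):
#   for submission in reddit.subreddit("whatisthisbug").stream.submissions():
#       if qualifies(Post(submission.score, submission.link_flair_text,
#                         submission.is_self)):
#           submission.reply(LICENSING_REQUEST_TEXT)
# Replies matching is_release() would then land in a human review queue.
```

Keeping the qualification and release-detection logic separate from the Reddit API calls would let the community review and adjust the thresholds without touching the bot's plumbing.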
- Sounds good :) --PantheraLeo1359531 😺 (talk) 19:11, 14 December 2025 (UTC)
- But the process should be as convenient as possible for the posters, otherwise it could deter them from considering the licensing --PantheraLeo1359531 😺 (talk) 19:13, 14 December 2025 (UTC)
- Yes, I imagine they could just reply “yes” to the automated comment Zanahary (talk) 20:32, 14 December 2025 (UTC)
- unfortunately, I'm unsure just saying "yes" would constitute a release and ensure they have a reasonable understanding. A message like "I release this photo under CC-BY-4.0" would be an appropriate release. All the Best -- Chuck Talk 00:32, 15 December 2025 (UTC)
- +1 --PantheraLeo1359531 😺 (talk) 08:34, 15 December 2025 (UTC)
- That's fine, too. I think that part of the implementation should be simple, with guidance from the copyright-dedicated volunteers and VRT here. Zanahary (talk) 08:37, 15 December 2025 (UTC)
Support that would be great and I thought about this too, hence my post at Commons talk:Permission requests#Example site to find useful media to ask for permission. I think this would be only or most useful for data graphics, not random photos etc, or only feasible for a subset of images because mods and/or admins wouldn't allow more frequent posts. If everything that gets more than a threshold of upvotes gets such a request, there would be lots of problematic and/or low-quality media, but that may be worth it if this allows scaling this up. Maybe just do this for /r/DataIsBeautiful at first instead of more widely and then consider more subreddits if it works well, one at a time.
- Somebody would need to build the bot, and then this would need to be accepted by the subreddits' mods – I think these two things would be the main challenges.
- Regarding how to declare permission, the comment could say something like "If you're willing to license your image this way, please state so by commenting 'I, the creator of this work, license it under CC-BY-4.0'" and also allow for replies that have the license icon embedded in the image. So far, there are just 3 files from that sub and few files in Category:Images from Reddit. Often, info from the post needs to be included in the file description. Btw, I'm still waiting for a clarification from a chart creator whose file I uploaded here about whether it shows the share of operating systems used in user reports or for users who made reports in that month. Prototyperspective (talk) 01:19, 16 December 2025 (UTC)
- @Prototyperspective, the sorts of image subreddits are those like r/Whatisthisbug where people generally upload their own photos, or subreddits where original photographs are marked as such in a machine-readable way through flairs. I think this would avoid the copyright violations we could expect from promoting the poster of any image on reddit past an upvote threshold to say “I release!” Zanahary (talk) 08:24, 16 December 2025 (UTC)
- Good idea! However, it doesn't relate to anything I wrote – probably you thought that with
If everything that gets more than a threshold of upvotes gets such a request, there would be lots of problematic and/or low-quality media but that may be worth it
I (also) meant copyvios, but I meant things like low-resolution pics of bugs, echo-chamber posts, misinfo posts, educationally useless posts, etc. As said, I think this depends on which subreddits the media is sourced from and, secondly, it may be worth it and easily manageable, as the total number of uploads would probably still be fairly low and these uploads deletable via DRs. Prototyperspective (talk) 13:56, 16 December 2025 (UTC)
Oppose per Grand-Duc. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 13:38, 16 December 2025 (UTC)
- That's what en:Wikipedia:Images from social media, or elsewhere (which uses Wikipedia as a host, for its familiarity to non-Wikimedians) is for. Feel free to post a link to that, in Reddit discussions or elsewhere. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 21:03, 26 December 2025 (UTC)
- Doing that manually at scale is infeasible.
- Actually, no support or discussion is needed here to build & launch a bot that automatically comments in relevant Reddit posts with this link and a question asking whether they license the file under CC BY, as long as the respective subreddits' mods are okay with that. Prototyperspective (talk) 10:13, 4 January 2026 (UTC)
Oppose per Grand-Duc. I oppose automating this process, considering how much content on Reddit is reposts. ReneeWrites (talk) 21:13, 26 December 2025 (UTC)
- This would only be added to posts that are original content which are marked with [OC]. Prototyperspective (talk) 10:10, 4 January 2026 (UTC)
Getting Redditors to agree with having their photos on Commons is easy. Getting them through VRT is the hard part--Trade (talk) 14:56, 15 December 2025 (UTC)
- That’s kind of the key here: they just have to comment right there under their own posts, in response to the standard comment about licensing, “I release this post under the CC BY SA 4.0 license”. There’s no VRT processing needed! Zanahary (talk) 16:20, 15 December 2025 (UTC)
Oppose, because of the automation thing. Automating requests makes for a lack of review in regard to questions about COM:SCOPE and FOP. I'd support the idea of creating a tool to post such licensing requests, as long as the requests are only individually triggered by human contributors. The actual implementation could e.g. be a browser addon or a local gadget/tool like "video2commons". Regards, Grand-Duc (talk) 02:00, 16 December 2025 (UTC)
- In the context of what I wrote above: there is no FOP for data graphics and virtually all above a certain threshold are educationally useful. Additionally, it would be good to take into account real-world-practice where a tiny fraction of files that can be deleted via DRs is worth having lots of files to begin with. Prototyperspective (talk) 02:25, 16 December 2025 (UTC)
- Yes to your last point. People are automatically given the option on Flickr, iNaturalist, and YouTube to opt in to licensing their uploads under a Wikimedia-compatible license. This automated solicitation for Reddit would be much the same. Not all uploads released freely are necessarily uploaded to Commons; it’s up to volunteers to consider scope. Zanahary (talk) 08:27, 16 December 2025 (UTC)
- I took a peek at graphics posted in that "Data is beautiful" subreddit. I sincerely doubt that any of them are suitable for Commons, as a data source statement is not given every time and you often barely have any context. That's a good example of "not educationally useful" (barring teaching how to abuse statistics for guiding public opinion) and thus being out of scope. Such graphical datasets are handled by Wikipedia editions themselves (there's a MediaWiki extension for that IIRC), directly based upon any referenced data source.
- @Zanahary: Flickr and Youtube made an executive decision to offer the Creative Commons licenses by themselves. I don't know whether the companies were lobbied by the WMF to do so, it could very well have been a thing of gaining a good standing, independent of the Wikimedia movement. If a thing like that, changing a company policy, is your aim, it's far beyond the scope of a Commons VP proposal. Regards, Grand-Duc (talk) 10:20, 16 December 2025 (UTC)
- Take a look at r/WhatIsThisBug for an example of the sort of subreddit that I had in mind. The Flickr/YouTube/Soundcloud/iNat examples are meant as a parallel to show that just prompting uploaders on other websites to release their work under a WM-compatible license doesn't necessarily lead to a flood of inappropriate material on Commons, as it's still up to the Commons community to judge media works for scope and copyright. Zanahary (talk) 14:17, 16 December 2025 (UTC)
- The subreddit "What is this bug" looks and feels, sorry, like crap. When I accessed it, I saw only imagery where the image quality is too low (lack of sharpness) for any sensible import here. The images also apparently lack any sound description and localisation, data needed nowadays for scientific motifs. Furthermore, such images are more often than not shot in densely populated areas. This entails that Commons is likely to already have a coverage on those motifs in a decent quality.
- That subreddit may serve for another purpose: reducing the population in Category:Unidentified insects by country et al....
- I'm actually tempted to upgrade my oppose to a strong one; this Reddit import idea really looks like a stillbirth to me. Regards, Grand-Duc (talk) 18:03, 16 December 2025 (UTC)
- I really don’t get the opposition. Low quality images can be found everywhere, including on websites that we draw a huge volume of media works from and in the rolls of own-work uploads people make directly to Commons. Casting a wide net to encourage copyright holders to release with a license compatible with Commons can, in my view, only make it easier to get quality works for Commons, and doesn’t risk tricking Commons volunteers into shoving junk onto the project. What negative outcome do you imagine from this project? Zanahary (talk) 21:39, 16 December 2025 (UTC)
- I thought you're proposing also a bot that uploads files where the permission was given instead of leaving it to volunteers. Whether the permission request in the comments is just for enabling upload or makes the file either automatically or most-likely uploaded makes a great difference to the OC posters on reddit. I don't think many would give permission just to enable a hypothetical Commons upload, at most it would be granted if upload is basically a given.
- @Grand-Duc: Most posts there have the data sources specified in the comments, at least the top-voted ones. Instead of the link above, it's better to look at this. There are lots of huge gaps in data graphics and many of these would be useful. So I think your claim that they don't have the data sources is false, at least for a reasonable cut-off vote threshold at which a bot requests permission. And there are of course also other subreddits where data sources don't matter because it's just a photo. Prototyperspective (talk) 14:06, 16 December 2025 (UTC)
- No, I think the public nature of the disclosure would lead to trolling if stuff got automatically uploaded on release. I don't see why an otherwise interested redditor would turn down the chance to release their work when told "In order for images to be usable on Wikimedia projects, including Wikipedia, they need to be licensed under XYZ conditions ... do you agree to release this post under a Wikimedia-compatible license, so that it can be used on Wikimedia projects?" Zanahary (talk) 14:19, 16 December 2025 (UTC)
- Ok well, that's also an approach. I don't see how trolling is a concern though (there is a votes threshold) and I don't know how you envision the review queue working. An idea would be for the bot to update a report page with a table where one column has the external link to the permission-granted post, and then users manually mark rows as done. Prototyperspective (talk) 15:13, 16 December 2025 (UTC)
Weak oppose. Conceptually this isn't a million miles away from User:Red panda bot, which drags in tens of thousands of Flickr photos on the grounds that they were promoted to "Flickr Explore". The bot has a step of human curation, but it's not very scrupulous: it sometimes uploads images with FOP, packaging or AI issues.
"They could agree right there in the comments. Then the post could be added to a queue/pool where volunteers could verify the license and evaluate the usefulness of the image before uploading to Commons."
This sequence does sound like it's potentially going to be wasting the time of Reddit users, if a bot sometimes asks them to consider and confirm a licence and then ... we decide not to import their image because nobody here wants it, or there's a copyright problem, or we already have better images of the same thing, or we do want it but it took us six months to upload it. It'd undermine the bot's purpose if it started to be ignored or resented in the subreddits that it patrolled.
- Grand-Duc's suggestion of a tool to process individual human requests is a good one. Perhaps something like User:CommonsDelinker/commands where a Commons user can add a Reddit URL to a list, and a bot will go off and ask permission (and maybe even upload the file if it gets it) before pinging the requester with an update. Belbury (talk) 19:04, 16 December 2025 (UTC)
Oppose per Grand-Duc and Belbury -- Ooligan (talk) 00:34, 7 January 2026 (UTC)
A page that shows all works on Commons by a given creator
[edit]
- in a sortable table, like Paintings by Wassily Kandinsky (Sum of all paintings), but automatic and for every Wikidata-linked creator
The Creator template/tag is great, but there's no easy way to load up all (and only) works on Commons with a given creator tag. I propose a Works:Creator Name, WorksByCreator:Creator Name, CreatorWorks:Creator Name, or similar page that shows all (and only) works on Commons tagged with that creator. The Edinburgh Early Photography Archive (talk) 21:58, 24 December 2025 (UTC)
- @The Edinburgh Early Photography Archive: That's what categories are for, e.g. Category:Works by Samuel Alexander Walker. Sam Wilson 23:23, 24 December 2025 (UTC)
- That requires placing things in categories separately. It doesn't leverage the Creator tag. It won't show you anything tagged with the creator that hasn't also been placed in the category (if the category exists at all). The Edinburgh Early Photography Archive (talk) 06:51, 25 December 2025 (UTC)
- Or Special:Search/insource:"Creator:Creator Name", assuming you mean specifically things tagged with the "Creator" template for that person. E.g. Special:Search/insource:"Creator:Asahel Curtis". - Jmabel ! talk 00:36, 25 December 2025 (UTC)
- Oh yeah, good point. Another way could be 'what links here', e.g. Special:WhatLinksHere/Creator:Samuel_Alexander_Walker. Sam Wilson 03:26, 25 December 2025 (UTC)
- Doing a manual search is not elegant or user-friendly, and again does not leverage the Creator tag properly. Part of the point of the Creator tag must surely be to tie together separate works on Commons with the same creator, in a straightforward and easily-viewable way, not using a complicated search function. Even if it must depend on a complicated search function, there should be a link to it on the Creator template. I should be able to look at a Creator section and easily open up "Other works by this creator". The Edinburgh Early Photography Archive (talk) 06:53, 25 December 2025 (UTC)
- hastemplate is a neater search method: Special:Search/hastemplate:"Creator:Asahel Curtis".
- @The Edinburgh Early Photography Archive has a good point, that the creator template should include this link. RoyZuo (talk) 14:56, 29 December 2025 (UTC)
- @The Edinburgh Early Photography Archive: Hi, all files by a given creator should be in the category for this creator. Please see Category:Works by Wassily Kandinsky for a prolific artist with hundreds of files ordered by style of work, and then by date, name, museum, source, subject, genre, etc. That's the main reason for the categories. An unsorted list of works would not be useful, but there is Paintings by Wassily Kandinsky for a list of all works with a Wikidata item. Yann (talk) 09:42, 25 December 2025 (UTC)
- This again depends on people using the categories properly (and the categories may not even exist). It might be a bit overwhelming for very prolific creators, but it would be very useful for less prolific ones. And it could be sorted in various ways, for example by title. It could be presented in a table like Paintings by Wassily Kandinsky. That page isn't overwhelming, and it's generated/updated automatically by a bot anyway, so why not an automatic page for any Creator? The Edinburgh Early Photography Archive (talk) 09:49, 25 December 2025 (UTC)
- Your assumption about new users does not make sense. Creating a creator page is much more complex than creating a category. Creator pages are an outdated way to store information about creators anyway. This is now done through Wikidata and these pages are not needed anymore. GPSLeo (talk) 10:41, 25 December 2025 (UTC)
- I just meant that in some cases a Creator page already exists but a "Works by" category does not. I was not aware that Creator pages were outdated. When I look at Creator:Samuel Alexander Walker it looks like it's generated entirely from Wikidata? The Edinburgh Early Photography Archive (talk) 10:44, 25 December 2025 (UTC)
- Actually, having a category is required when creating a Creator template. If the category doesn't exist, a warning is given. Yann (talk) 12:07, 25 December 2025 (UTC)
- My apologies! I was not aware of this. Thank you to everyone for all of the very useful comments. I suppose my question is still, if there is a very useful, Wikidata-based, bot-generated page like Paintings by Wassily Kandinsky, with all works in a table, easily viewable and sortable in many different ways (unlike "Works by" category pages), then why isn't there the same exact (but automatically generated) page, with a nice big sortable table, for any Wikidata-linked creator? I feel that there should be. That's my proposal (taking into account all of the previous comments). The Edinburgh Early Photography Archive (talk) 12:13, 25 December 2025 (UTC)
- Please see related discussion in the thread above at Commons:Village pump/Proposals#Are "Sum of all paintings" project galleries welcome on wiki commons?. Thanks. Tvpuppy (talk) 14:24, 25 December 2025 (UTC)
- Thank you, I see how that discussion is basically what I'm asking about. There seems to be support for the Sum of All Paintings tables in general. In response to that discussion, I would say (1) Why only paintings and not all works? (2) Why bot-generated and not automatic? (3) Why only some selected creators? Who decides which creators deserve a convenient sortable table of all of their works? Why Wassily Kandinsky but not Hilma af Klint? It should be universal for any creator. "Works:Creator Name" could load up a convenient sortable table of all works for any creator, without a human deciding if they or their type of work "qualify". The Edinburgh Early Photography Archive (talk) 14:32, 25 December 2025 (UTC)
Why only paintings and not all works?
Because that is the project that a group of volunteers took on. If you think their scope is too narrow, and you can come in with some resources, they might be open to widening it. If you can't come in with resources, I for one do not recommend approaching them with "why aren't you doing more?"
Why bot-generated and not automatic?
Because volunteers can write bots, but cannot modify core wiki code. If you or your organization would like to give a grant to the Wikimedia Foundation to put something like this in the core code, it might be worth discussing. But in terms of funding from WMF, Commons has been something of a red-headed stepchild, and if funds were to become available for a feature of the community's choosing, I cannot imagine this being among the top dozen. See meta:Community Wishlist/Wishes; sort by "votes".
Who decides…
The volunteers working on this. I imagine they'd be open to almost any artist being included; someone would have to enter their works into Wikidata (either by hand or by an automated intake of a database compatible with CC-0); also, for Commons at least, there is not much point to doing this for artists who have few or no works yet in the public domain, since we cannot host images of those except insofar as we may have a few via free licenses or freedom of panorama. - Jmabel ! talk 19:46, 25 December 2025 (UTC)
- I didn’t realise you needed to donate money for something to be changed or improved. Thank you for explaining. The Edinburgh Early Photography Archive (talk) 19:57, 25 December 2025 (UTC)
- In terms of getting WMF resources for Commons (or any other project) that the Foundation hasn't chosen to allocate, yes, you probably do, and only they can modify the underlying software. I believe that, like most foundations, they would consider targeted grants to do a specific piece of work that they consider positive but not otherwise a top priority. Otherwise, someone coming in more or less from the outside and saying "spend your money differently" isn't going to have much influence.
- But when I said "If you can't come in with resources" above, I mainly meant additional volunteers to do the work. The Foundation provides very little support for Commons, mainly server hosting and the use of MediaWiki and Wikibase software; there have at times been as many as perhaps half a dozen WMF FTEs (FTE => "full-time equivalent", the equivalent of one dedicated full-time employee) devoted to Commons. At the moment, it would surprise me if the FTE equivalent is more than 2. We do a lot of things with bots that would, in theory, better be core functions because (as I said above) volunteers can write bots. - Jmabel ! talk 20:42, 25 December 2025 (UTC)
- Thank you so much for your patience and for explaining further. I hope my questions didn't come off as questioning the efforts of volunteers at Sum of all paintings, or the scope of their project. I think their work is fantastic and very useful. I only meant to propose that a convenient table-based view of all (Wikidata-linked) works on Commons by any given (Wikidata-linked) creator should be considered for future core functionality. I appreciate your explanation of why it isn't core functionality currently, and the challenges in having a change like this made, especially when there are other items that the community feel are more important. Thank you for highlighting possible ways forward as well. The Edinburgh Early Photography Archive (talk) 12:54, 26 December 2025 (UTC)
- You can create a page like "Sum of all paintings" for any artist. Paintings are more easily managed, copyright wise. We can copy them from any source if the paintings are in the public domain. That's not the case for other kinds of works, like statues, where the photographer gets a copyright. Yann (talk) 15:09, 26 December 2025 (UTC)
- Thank you, this is probably the best way forward for me! My interest is early photographers who are already fully in the public domain eg Creator:John_Moffat or Creator:Samuel Alexander Walker. So just to confirm, I could create a "Photographs by Samuel Alexander Walker" page with all of his Wikidata-tagged works in a sortable table (and possibly add a link to it on his Creator page somehow), and that would be allowed? The Edinburgh Early Photography Archive (talk) 17:54, 26 December 2025 (UTC)
- Seems reasonable to me, as long as we have images for a reasonable number of them. (I wouldn't want to see something like this on Commons for someone where we list, say, 2,000 works but have images for only 5 of them, and don't have any clear prospect of getting many more; not sure exactly where the cutoff would be.) - Jmabel ! talk 18:03, 26 December 2025 (UTC)
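As a sketch of what a bot behind such a "Photographs by ..." page might do, here is a minimal, hypothetical generator of a sortable wikitable from work records. In practice the rows would come from a Wikidata query; the column set, keys, and row data here are invented for illustration.

```python
# Hypothetical sketch: render a sortable wikitext table of a creator's works,
# in the style of the "Sum of all paintings" pages. In practice the rows
# would come from a Wikidata query; the columns here are invented examples.

def works_table(rows):
    """rows: dicts with 'title', 'date' and 'file' keys ('file' may be None
    when no image is on Commons yet)."""
    lines = [
        '{| class="wikitable sortable"',
        "! Image !! Title !! Date",
    ]
    # Default ordering by date, then title; the rendered table stays
    # re-sortable by any column thanks to the "sortable" class.
    for row in sorted(rows, key=lambda r: (r["date"], r["title"])):
        image = f"[[File:{row['file']}|120px]]" if row["file"] else ""
        lines.append("|-")
        lines.append(f"| {image} || {row['title']} || {row['date']}")
    lines.append("|}")
    return "\n".join(lines)
```

Listing works without an image with an empty cell (rather than dropping them) would make the coverage gaps Jmabel mentions visible at a glance.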
- Thank you, this is probably the best way forward for me! My interest is early photographers who are already fully in the public domain eg Creator:John_Moffat or Creator:Samuel Alexander Walker. So just to confirm, I could create a "Photographs by Samuel Alexander Walker" page with all of his Wikidata-tagged works in a sortable table (and possibly add a link to it on his Creator page somehow), and that would be allowed? The Edinburgh Early Photography Archive (talk) 17:54, 26 December 2025 (UTC)
- You can create a page like "Sum of all paintings" for any artist. Paintings are more easily managed, copyright wise. We can copy them from any source if the paintings are in the public domain. That's not the case for other kinds of works, like statues, where the photographer gets a copyright. Yann (talk) 15:09, 26 December 2025 (UTC)
- Thank you so much for your patience and for explaining further. I hope my questions didn't come off as questioning the efforts of volunteers at Sum of all paintings, or the scope of their project. I think their work is fantastic and very useful. I only meant to propose that a convenient table-based view of all (Wikidata-linked) works on Commons by any given (Wikidata-linked) creator should be considered for future core functionality. I appreciate your explanation of why it isn't core functionality currently, and the challenges in having a change like this made, especially when there are other items that the community feel are more important. Thank you for highlighting possible ways forward as well. The Edinburgh Early Photography Archive (talk) 12:54, 26 December 2025 (UTC)
- I didn’t realise you needed to donate money for something to be changed or improved. Thank you for explaining. The Edinburgh Early Photography Archive (talk) 19:57, 25 December 2025 (UTC)
- Thank you, I see how that discussion is basically what I'm asking about. There seems to be support for the Sum of All Paintings tables in general. In response to that discussion, I would say (1) Why only paintings and not all works? (2) Why bot-generated and not automatic? (3) Why only some selected creators? Who decides which creators deserve a convenient sortable table of all of their works? Why Wassily Kandinsky but not Hilma af Klint? It should be universal for any creator. "Works:Creator Name" could load up a convenient sortable table of all works for any creator, without a human deciding if they or their type of work "qualify". The Edinburgh Early Photography Archive (talk) 14:32, 25 December 2025 (UTC)
- Please see related discussion in the thread above at Commons:Village pump/Proposals#Are "Sum of all paintings" project galleries welcome on wiki commons?. Thanks. Tvpuppy (talk) 14:24, 25 December 2025 (UTC)
- My apologies! I was not aware of this. Thank you to everyone for all of the very useful comments. I suppose my question is still, if there is a very useful, Wikidata-based, bot-generated page like Paintings by Wassily Kandinsky, with all works in a table, easily viewable and sortable in many different ways (unlike "Works by" category pages), then why isn't there the same exact (but automatically generated) page, with a nice big sortable table, for any Wikidata-linked creator? I feel that there should be. That's my proposal (taking into account all of the previous comments). The Edinburgh Early Photography Archive (talk) 12:13, 25 December 2025 (UTC)
- Actually, having a category is required when creating a Creator template. If the category doesn't exist, a warning is given. Yann (talk) 12:07, 25 December 2025 (UTC)
- I just meant that in some cases a Creator page already exists but a "Works by" category does not. I was not aware that Creator pages were outdated. When I look at Creator:Samuel Alexander Walker it looks like it's generated entirely from Wikidata? The Edinburgh Early Photography Archive (talk) 10:44, 25 December 2025 (UTC)
- Your assumption about new users does not make sense. Creating a creator page is much more complex than creating a category. Creator pages are an outdated way to store information about creators anyway. This is now done through Wikidata and these pages are not needed anymore. GPSLeo (talk) 10:41, 25 December 2025 (UTC)
- This again depends on people using the categories properly (which may not even exist). It might be a bit overwhelming for very prolific creators, but it would be very useful for less prolific ones. And it could be sorted in various ways, for example by title. It could be presented in a table like Paintings by Wassily Kandinsky. That page isn't overwhelming, and it's generated/updated automatically by a bot anyway, so why not an automatic page for any Creator? The Edinburgh Early Photography Archive (talk) 09:49, 25 December 2025 (UTC)
Feedback requested: A free online tool for extracting images from video
[edit]Hi everyone,
I noticed that some users struggle with extracting high-quality images from video files for upload. I have developed a free online tool called Video To JPG (https://videotojpg.com) that helps extract JPG frames from videos.
It is free to use and I believe it could be helpful for editors who want to create thumbnails or extract specific frames from video content.
I would appreciate any feedback from the community:
1. Is this tool useful for your workflow?
2. Are there any features I should add to make it more Commons-friendly (e.g., PNG output, specific metadata)?
Note: I am the developer of this tool and I am posting here to seek consensus before adding it to any help pages, to avoid conflict of interest.
Thank you! Charlesding2024 (talk) 05:14, 28 December 2025 (UTC)
- This would be really helpful! I would look at Commons:CropTool as a model for how to integrate this sort of tool into Commons. Basically, it's loaded as an option in the left bar when viewing a compatible file, and automatically uploads the new extraction with correct metadata and a link to the file from which it was sourced. Zanahary (talk) 15:36, 29 December 2025 (UTC)
- Also, if it's not open-source, I don't see any Wikimedia project adopting it. I don't know if that's a rule. Zanahary (talk) 15:37, 29 December 2025 (UTC)
- @Charlesding2024: You could add wikitext output that indicates the source and the timecode within the source, for easier review. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 20:03, 29 December 2025 (UTC)
- @Jeff G. Thanks for the wikitext suggestion! I would love to implement this to make the review process smoother.
- Could you provide an example of the specific format or templates you would prefer? For instance, should I generate a full
{{Information}} template, or just a specific tag like {{Extracted from}} with the timecode? - Having a sample of your desired output would help me ensure it fits the Commons workflow perfectly. Charlesding2024 (talk) 07:04, 30 December 2025 (UTC)
- @Charlesding2024: I suppose you could start with File:Elephants Dream s6 both.jpg, which is used as an illustration of the "Audiovisual works" section of COM:SCREENSHOT. I don't know of a standard method of indicating the timing other than "at 1m02s" (as distinguished from "at 1h02m"). {{Extracted from}} does not seem to fit your client-side focus. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 11:20, 30 December 2025 (UTC)
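As a concrete illustration of the two options discussed above, a filled-in {{Information}} block for an extracted frame could look something like the sketch below, using the "at 1m02s" timing style. All field values are placeholders (the exact filename, date, and author would come from the actual source file), not a prescribed format:

```wikitext
{{Information
 |description = Single frame extracted at 1m02s from [[:File:Elephants Dream.ogv]]
 |date = 2006
 |source = [[:File:Elephants Dream.ogv]] (frame at 1m02s)
 |author = Blender Foundation
}}
```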
- @Jeff G. Thanks for the inspiration! Since my tool is designed for batch extraction (users often pick multiple frames at once), I realized a single text box wouldn't suffice.
- I plan to implement a comprehensive solution covering three aspects to fit the Commons workflow:
- Smart Filenames: Extracted images will be automatically named with the timestamp (e.g., MyVideo_at_1m02s.jpg). This ensures the metadata is preserved in the file itself when uploading.
- Gallery View Copy: In the results grid, I will add a small 'Copy Source' button under each image, generating the specific text you suggested: Source: Extracted from [Filename] at [Time].
- Batch Metadata Export: For heavy users, I'll add an option to copy/export a text list of all source info for the current batch at once.
- I can implement all of these features. Does this sound like a solid workflow for editors? If so, I'll get to work on it immediately! Charlesding2024 (talk) 16:25, 4 January 2026 (UTC)
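For readers following along, the filename and source-line scheme described in the list above can be sketched in a few lines of browser JavaScript. This is an illustration of the proposed convention, not the tool's actual code, and the function names are invented for this example:

```javascript
// Sketch of the proposed naming scheme (illustrative, not the tool's code).
// Renders a time in seconds as "1m02s", or "1h02m03s" once hours are involved.
function timeLabel(totalSeconds) {
  const s = Math.floor(totalSeconds);
  const h = Math.floor(s / 3600);
  const m = Math.floor((s % 3600) / 60);
  const pad = (n) => String(n).padStart(2, "0");
  return h > 0 ? `${h}h${pad(m)}m${pad(s % 60)}s` : `${m}m${pad(s % 60)}s`;
}

// Builds the timestamped filename and the plain-text source line
// for one extracted frame.
function framePaths(videoName, seconds) {
  const base = videoName.replace(/\.[^.]+$/, ""); // strip the extension
  const label = timeLabel(seconds);
  return {
    filename: `${base}_at_${label}.jpg`,
    sourceLine: `Source: Extracted from ${videoName} at ${label}`,
  };
}
```

For example, `framePaths("MyVideo.mp4", 62)` would yield the filename `MyVideo_at_1m02s.jpg` and the source line `Source: Extracted from MyVideo.mp4 at 1m02s`, matching the format suggested earlier in the thread.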
- @Jeff G. I've just updated the tool based on your feedback! You can try the new features live on VideoToJPG.com.
- Implementation Details:
- Smart UI: I added a "Src" dropdown at the bottom of each thumbnail. Users can now choose between "Plain Text" (standard) or "Wikitext" (for linked files). I made this selectable to avoid forcing "red links" if the source video isn't on Commons.
- Batch Support: The ZIP download now includes a separate source_info_wiki.txt alongside the standard metadata, making batch uploads easier.
- A quick question on placement: Now that the tool is optimized for the Commons workflow, do you think it would be appropriate to list it on the Commons:Tools page?
- If so, would you recommend placing it under the "Upload media" section or "Maintenance"? I'd value your advice on where it would be most discoverable for editors. Charlesding2024 (talk) 09:37, 5 January 2026 (UTC)
- Aside from the open-source requirement that Zanahary has already noted, lossless PNG output would be sweet; I've used ffmpeg for that before, so maybe you could look into integrating ffmpeg. JPEG is a visually lossy format: you always get some artifacts and loss of detail when converting an image to JPEG. It's moon (talk) 20:51, 29 December 2025 (UTC)
- @It's moon Thank you for the detailed feedback regarding image quality and artifacts.
- You are absolutely right about JPEG. I am happy to let you know that I already support lossless PNG output. You can try the specific feature here: https://videotojpg.com/video-to-png.
- Regarding the open-source mention: I plan to open-source the core processing logic in the future to facilitate community review.
- Also, similar to your ffmpeg suggestion, my tool runs entirely client-side (in the browser). This means it achieves local processing speed and privacy without needing to upload files to a server. Charlesding2024 (talk) 07:25, 30 December 2025 (UTC)
  - That is really cool! I will definitely use your tool. Another area where you could improve it further, if you are interested in more feedback, would be to support video URLs as input (and even YouTube URLs if you wanted to take it to the next level). A lot of tools on MediaWiki allow users to provide a file URL instead of needing to download something to reupload it. It is really useful to prevent junk from piling up on users' hard drives. It's moon (talk) 08:10, 30 December 2025 (UTC)
- @It's moon I'm glad to hear that you find the tool useful!
- Regarding the URL input (and YouTube) support: That is indeed a very convenient feature to save disk space.
- However, since my tool runs entirely client-side, fetching videos from external URLs (especially YouTube) is technically challenging due to browser CORS (Cross-Origin Resource Sharing) restrictions. It usually requires a backend server to proxy the data, which I am currently avoiding to keep the tool private and cost-free.
- That said, I will investigate if there are any client-side solutions to load videos from CORS-enabled sources (like Wikimedia Commons files) directly! Charlesding2024 (talk) 08:33, 30 December 2025 (UTC)
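For context on the CORS limitation discussed above: a browser can only read pixel data from a cross-origin video back out of a canvas if the remote server sends an Access-Control-Allow-Origin header (Wikimedia's upload.wikimedia.org does); otherwise the canvas becomes "tainted" and frame extraction fails. A minimal sketch of the origin check, with an invented function name:

```javascript
// Sketch with an illustrative name: decide whether loading a video URL
// from the browser will depend on the remote server's CORS headers.
// A cross-origin video can be drawn to a <canvas> and read back only if
// the server sends Access-Control-Allow-Origin; otherwise the canvas is
// "tainted" and pixel extraction throws a SecurityError.
function needsCors(videoUrl, pageOrigin) {
  // Resolve relative URLs against the page, then compare origins.
  return new URL(videoUrl, pageOrigin).origin !== new URL(pageOrigin).origin;
}
// In the browser, one would also set video.crossOrigin = "anonymous"
// before assigning a cross-origin src, so the CORS request is made.
```

So a file served from upload.wikimedia.org loaded by a page on videotojpg.com is cross-origin and works only because Wikimedia serves permissive CORS headers, which matches the behaviour Charlesding2024 describes.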
- @It's moon Update: I've added support for direct URL loading in v1.4.2. Since this is client-side, it requires the source to support CORS. I also added a fallback dialog to guide users when CORS blocks the request. Thanks again for the idea! Charlesding2024 (talk) 05:38, 3 January 2026 (UTC)
- Nice work! It's moon (talk) 05:45, 3 January 2026 (UTC)
- I don't see what the advantages to just creating screen stills in your local video player are. Why would one use this website instead of just take stills in video players? In MPV and VLC player I think one just presses ctrl+s to save the full-resolution still. Prototyperspective (talk) 10:17, 4 January 2026 (UTC)
- @Prototyperspective: You raise a valid point regarding local players like VLC.
- However, the key feature that distinguishes this tool is its built-in blur detection.
- When extracting frames from a video (especially with motion), it is often difficult to tell by eye which specific frame is the sharpest. My tool analyzes the frames and provides a sharpness score, helping editors objectively identify and save the highest quality frame possible.
- This is something standard video players don't usually offer, and it's specifically designed to help upload better quality images to Commons. Charlesding2024 (talk) 11:25, 4 January 2026 (UTC)
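The tool's actual scoring method is not published, so purely as an illustration of what such a "sharpness score" could be based on, here is one common blur metric, the variance of the Laplacian: flat or blurry frames produce small Laplacian responses and low variance, while sharp frames with strong edges produce high variance.

```javascript
// Illustrative sketch only (this is an assumption, not the tool's real
// implementation): variance-of-Laplacian blur metric.
// `gray` is a row-major array of grayscale pixel values.
function sharpnessScore(gray, width, height) {
  const lap = [];
  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      const i = y * width + x;
      // 4-neighbour Laplacian: strong response at edges, near zero in flat areas
      lap.push(
        gray[i - 1] + gray[i + 1] + gray[i - width] + gray[i + width] - 4 * gray[i]
      );
    }
  }
  const mean = lap.reduce((a, b) => a + b, 0) / lap.length;
  // Higher variance means more edge detail, i.e. a sharper frame
  return lap.reduce((a, v) => a + (v - mean) ** 2, 0) / lap.length;
}
```

Scores like this are only meaningful relative to one another: comparing candidate frames of the same scene and keeping the highest-scoring one, which is exactly the workflow described above.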
Should category:hentai (very NSFW), category:fan service (kind of NSFW) and to a lesser extent category:ecchi (NSFW) be deprecated in favor of only using more objective categories?
[edit]I’ve had an issue with these categories for a while even though I’m pretty sure I created category:ecchi a long time ago. They’re all lacking in an agreed-upon definition and redundant to each other. Hentai is a western neologism for anime-styled porn; that meaning doesn’t exist in Japan (beyond similarly informal terms like “H-manga”) and even in the west it’s a pretty informal category that means different things to different people. Fan service isn’t even necessarily sexy; it’s just stuff gratuitously added to please the audience, which could be anything. Ecchi is at least sort of defined as a genre basically akin to “sex comedy” but I’m not sure you could objectively determine whether an individual image is “Ecchi” given it also just means “sexy in a fun, playful, not super explicit way”.
Because of these issues I propose that Category:Hentai in anime and manga (nsfw) be deleted; category:Hentai be restricted to files/cats about western hentai providers like Fakku and maybe category:Ahegao (nsfw-ish); category:fan service be deleted entirely; and category:ecchi be redirected to category:ecchi anime and manga and only be used as a category for files/categories about works in the genre. We also have categories like Category:Nude or partially nude people in anime and manga (nsfw), Category:People having sex in anime and manga (nsfw), Category:Swimwear in anime and manga (kind of nsfw) and Category:Lingerie in anime and manga (kind of nsfw) that cover the same scope without the overlap and subjectivity issues. Dronebogus (talk) 22:21, 5 January 2026 (UTC)
- Have you considered bringing this up at
<s>COM:CFI</s> COM:CFD? I think it belongs there. If you want visibility, you can start a CFD discussion and then (maybe after a while) post here a pointer. whym (talk) 01:20, 11 January 2026 (UTC)
- I don’t think COM:CFI is the right link Dronebogus (talk) 01:51, 11 January 2026 (UTC)
- Fixed. whym (talk) 02:49, 11 January 2026 (UTC)
- I considered that, but there’s so many categories that would be affected by this in different but overlapping ways. Plus CFD has a backlog that stretches to the moon and back Dronebogus (talk) 04:25, 11 January 2026 (UTC)
- My experience has been that CfD is mostly effective as a way of gaining consensus for simple proposed actions like renaming, merging, or deleting categories. It's less effective in more complex situations, or where the nominator isn't sure of what to do. Omphalographer (talk) 22:07, 11 January 2026 (UTC)
- I don’t think COM:CFI is the right link Dronebogus (talk) 01:51, 11 January 2026 (UTC)
Adding a thing in block notices
[edit]Hello everyone, I would like the community to discuss a suggestion made by @0x0a at COM:ANU. For my part, I agree with them. I quote: "some new users may not be aware of our blocking policy" and "our block message box doesn't explicitly state that creating a new account during the block period is not allowed, which might lead them into an endless cycle of block and block evasion. I found it necessary to clearly state this rule in the block message box."
I would say that we can adjust the block notices to state that the user shouldn't create a new account, as that will further lead to blocks and bans for socking. Shaan SenguptaTalk 14:08, 7 January 2026 (UTC)
- I second this motion. 0x0a (talk) 14:35, 7 January 2026 (UTC)
- Votes/Comments
Support. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 15:34, 7 January 2026 (UTC)
Support --Mateus2019 (talk) 16:04, 7 January 2026 (UTC)
Support I have a hard time believing anyone with any amount of common sense or experience with the internet wouldn't know not to just create a new account, but the addition can't hurt. The Squirrel Conspiracy (talk) 18:12, 7 January 2026 (UTC)
Support It removes plausible deniability. JayCubby (talk) 15:47, 9 January 2026 (UTC)
Support For anyone with common sense, sure. But a lot of users who get blocked seem to lack common sense - apparently it isn't as common as the phrase would imply? - so we might as well spell it out. Omphalographer (talk) 08:26, 10 January 2026 (UTC)
- As to why it's not obvious, I wonder if the root cause might be an assumption that most account suspensions are automated (which can be true for other platforms that new users are more familiar with). If so, we might want to let them know that humans (rather than a big, faceless and glitchy automated system) block accounts here and you are expected to engage with those humans when you want to be unblocked. whym (talk) 01:10, 11 January 2026 (UTC)
Support If the obvious needs to be explicitly stated. --Yann (talk) 18:14, 7 January 2026 (UTC)
Support No down-side. - Jmabel ! talk 03:51, 8 January 2026 (UTC)
Support change to {{Blocked}} template, it would be more helpful to newcomers. Thanks. Tvpuppy (talk) 03:20, 9 January 2026 (UTC)
Support Infrogmation of New Orleans (talk) 15:59, 9 January 2026 (UTC)- I won't oppose, but let's keep it short. A 40 word increase would probably be too much. I would go for a short sentence (10-20 words) about it, with a link to a longer explanation if necessary. whym (talk) 01:08, 11 January 2026 (UTC)
- @Whym, would you like to suggest a draft? Or maybe @Tvpuppy, you did good work with DR notice. Anyone else is also invited since this has been supported so far. Shaan SenguptaTalk 05:09, 11 January 2026 (UTC)
Support And I'll proceed with making a text suggestion (with some placeholder words where Wikitext would collide with the quote template) in the new section below. Grand-Duc (talk) 03:38, 12 January 2026 (UTC)
Support I see this as a step in the right direction. Wolverine X-eye 09:37, 12 January 2026 (UTC)
Text renovation workbench
[edit]Current text in {{Blocked}}:
You have been blocked from editing Commons for a duration of TIME for the following reason: REASON.
If you wish to make useful contributions, you may do so after the block expires. If you believe this block is unjustified, you may add UNBLOCK REQUEST below this message explaining clearly why you should be unblocked. See also the block log. For more information, see Appealing a block.
I suggest the following additions (in italics here):
You have been blocked from editing Commons for a duration of TIME for the following reason: REASON. A human reviewed your contributions and found them against Commons' rules.
If you wish to make useful contributions, you may do so after the block expires. Creating a new account while this block is in force is in itself a blockable offense and can lead to a permanent exclusion! Do not try to game the system. If you believe this block is unjustified, you may add UNBLOCK REQUEST below this message explaining clearly why you should be unblocked. See also the block log. For more information, see Appealing a block.
— Preceding unsigned comment added by Grand-Duc (talk • contribs) 03:38, 12 January 2026 (UTC)
- Alternative suggestions for the italicized passages:
- An administrator has reviewed your contributions and found them to be against Commons' rules.
- Creating a new account while this block is in force is itself a blockable offense and may lead to permanent exclusion from Commons.
- However, neither that nor the wording above works for an indef-block, where we need something more like "Creating a new account while this block is in force is itself a blockable offense and makes it very unlikely that your block will ever be rescinded."
- And when we block accounts for being sockpuppets, even that is not on the mark; in that case we either can omit this or need something clarifying that this sockpuppet account will almost certainly never be unblocked.
- Jmabel ! talk 05:53, 12 January 2026 (UTC)
- The point made by Whym above at Revision #1145790913, with
I wonder if the root cause might be an assumption that most account suspensions are automated (which can be true for other platforms that new users are more familiar with)
stirred me. I think it'll be worthwhile to underline that humans do the blocking, since it's not necessarily clear that something called an "administrator" is actually human, going by experiences on social networks or in online games.
- Indeed, I did not think about sockpuppets. But Jmabel's suggestion is in my opinion a sound starting point to work on or adapt outright. About socks: either a boolean switch "sock Y/N" would be needed, or isn't there {{Sockpuppet}} available already? Grand-Duc (talk) 07:02, 12 January 2026 (UTC)
- {{Sockpuppet}} goes on the user page, not the user talk page, and is not addressed to the user themself but to admins and others acting in a quasi-administrative capacity. - Jmabel ! talk 20:42, 12 January 2026 (UTC)
- The point made by Whym above at Revision #1145790913, with
