I did make a classifier for videos that feeds the title, tags, description, and closed captions into an LLM. I got roughly 1000 entries classified that way; the issue is that most of them were non-English videos, and new videos keep coming in from elsewhere on PeerTube that don't have these classifications.
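The gist of it is just a prompt built from the video metadata, roughly like this (a sketch only; it assumes the full video object and caption text are already fetched, and the wording and category list are illustrative, not my actual prompt):

```js
// Illustrative prompt builder: field names (name, tags, description) follow the
// PeerTube video API, but buildClassificationPrompt itself is just a sketch.
function buildClassificationPrompt(video, captions) {
  return [
    "Classify this PeerTube video into one category (e.g. music, gaming, news, tech, vlog).",
    `Title: ${video.name}`,
    `Tags: ${(video.tags || []).join(", ")}`,
    `Description: ${video.description || ""}`,
    `Closed captions (excerpt): ${captions.slice(0, 2000)}`,
    "Answer with the category only."
  ].join("\n");
}
```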
Video processing is cool, just computationally expensive. Also, watchers could classify the videos themselves and then use cosine similarity (or whatever algorithm) on that; a rough sketch of the similarity part is below. I did suggest to PeerTube that the categories people assign to a video be shared with other people (like a Mastodon post); eventually it morphed into the idea of a lightweight PeerTube instance that only does the API.
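Minimal sketch of the similarity idea, assuming the classifications have already been turned into numeric vectors (which features go in them is still open):

```js
// Plain cosine similarity between two equal-length numeric vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1); // 0 if either vector is all zeros
}

// e.g. cosineSimilarity([1, 0, 0.5], [0.9, 0.1, 0.4]) ≈ 0.99
```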
Apparently not; I'd have to convert the scripts, and I have a lot of issues with the current script.
I've been developing for Brave, so it will work for Chrome, and it should work for Firefox. For Brave it's easy: you put the GitHub files into a folder, put the Brave browser into developer mode, add the folder under "manage extensions", then press F12 to open the console and look at the variables in extension storage.
If you get it to load properly, you should see 🔄 Starting fetch for template: https://dalek.zone/api/v1/videos?sort=-publishedAt&nsfw=both&count=10 in DevTools on Brave.
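That log line corresponds to something like this (variable names are illustrative, not the actual extension code): a fetch against the PeerTube REST API for the newest videos.

```js
// Fetch the latest videos from a PeerTube instance's public API.
const template = "https://dalek.zone/api/v1/videos?sort=-publishedAt&nsfw=both&count=10";

async function fetchLatestVideos() {
  console.log(`🔄 Starting fetch for template: ${template}`);
  const response = await fetch(template);
  if (!response.ok) throw new Error(`PeerTube API returned ${response.status}`);
  const body = await response.json();
  return body.data; // PeerTube wraps the video list in a `data` array
}
```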
I figured that someone did it when Elon banned that account on Twitter; I never got around to looking for it.
lol we ignore real problems in the world to do this
I wouldn't endorse violence if the rich weren't actively doing violence.
Really I'm thinking about what data is okay to share and what data should be kept to the user. Basically I determined that the description of the video is the only thing that can be public, and the person/bot describing it is okay to share (like associating their channel with a description they make for a specific video), while the watcher's device can collect video metadata locally to find suggestions.
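Roughly the split I'm picturing, with made-up field names:

```js
// Illustrative only: what could be published vs. what stays on the watcher's device.
const publicRecord = {
  videoUrl: "https://dalek.zone/videos/watch/some-video",  // which video is described
  describedBy: "@someChannel@dalek.zone",                   // the person/bot that wrote it
  description: "Guitar cover of a folk song, no vocals"
};

const localOnly = {
  watchHistory: [/* video ids the watcher has seen */],
  watchTime: {/* per-video seconds watched */},
  userVector: [/* the watcher's own preference vector */]
};
```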
I like that idea for a stupid simple algorithm. Ironically, I plan for there to be a variety of algorithms, both user-only and aggregate. Really I'm trying to pin down a standardized video vector that can describe any video to any level of detail.
It'd be better to store the video vector on an instance so that watchers can retrieve it, just logistics. A video vector (or an element of it) can be calculated anywhere and just communicated to an instance; the idea is to be flexible. The ActivityPub protocol has made the decision easy: the video vector has to be a JSON element inside a video's JSON data.
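Something like this is what I picture, a sketch only, field names not settled:

```json
{
  "uuid": "9c9de5e8-0a1e-484a-b099-e80766180a6d",
  "name": "Some video title",
  "videoVector": {
    "version": 1,
    "calculatedBy": "https://dalek.zone/accounts/somebot",
    "elements": {
      "isMusic": 0.92,
      "tempoBpm": 120,
      "genre": { "folk": 0.7, "rock": 0.2 }
    }
  }
}
```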
It would be better to store the results of a calculation to avoid repeating it. I'm looking into music classification: the entire video can be parsed to see whether it's music or not, plus the tempo and genre. I'd assume that would be fairly costly to calculate, so instead the instance can send a video vector that states all that information.
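Sketch of the "don't recompute" idea, assuming the vector ends up attached to the video JSON as above; analyzeAudio() is a stand-in for whatever expensive music/tempo/genre analysis would otherwise run:

```js
// Reuse a vector from the instance or a local cache before doing expensive analysis.
async function getVideoVector(video) {
  if (video.videoVector) return video.videoVector;        // instance already has it
  const cached = await chrome.storage.local.get(video.uuid);
  if (cached[video.uuid]) return cached[video.uuid];      // we computed it before
  const vector = await analyzeAudio(video);               // placeholder for the costly path
  await chrome.storage.local.set({ [video.uuid]: vector });
  return vector;
}
```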
I'm not at the aggregating-data stage yet, but you can just put a random ID on a data set.
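For example, in the browser that's a one-liner with a standard API:

```js
// Throwaway identifier for an anonymized data set.
const datasetId = crypto.randomUUID();
```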
I haven't made anything yet; I just wanted to articulate that a basic algorithm can be done ethically, where an instance, a watcher, or the fediverse in general can make a vector to define a video, that vector can be shared via ActivityPub, and the user can have a vector for themselves and even their own algorithm to sift through videos.
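The user-side "algorithm" part could be as simple as this sketch, reusing cosineSimilarity() from above; toElementArray() is a placeholder for however the vector's named elements get flattened into numbers:

```js
// Rank fetched videos against the watcher's own preference vector.
function rankVideos(videos, userVector) {
  return videos
    .filter(v => v.videoVector)
    .map(v => ({ video: v, score: cosineSimilarity(toElementArray(v.videoVector), userVector) }))
    .sort((a, b) => b.score - a.score);
}
```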
I'm just starting, and right now I have to figure out how to format the video vector: do I want .json, .csv, or .xml?
Woot woot, under capitalism your DNA is a commodity.
I never got that. Arms shouldn't be limited to firearms; clearly the Founding Fathers fought with swords and bayonets, so those would have been on their minds.
It's a browser extension for Brave (or any Chromium-based browser); it's in the GitHub README. "Recommendation algo" was self-explanatory: it's meant to recommend you videos on PeerTube. I only screenshotted the only UI that exists; the only things I can screenshot are the variables stored in IndexedDB and local extension storage.
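Those screenshots amount to dumping the stored variables from the DevTools console (F12). These calls are the standard Chromium extension storage APIs, not project-specific code:

```js
// Dump everything in the extension's local storage as a table.
chrome.storage.local.get(null).then(all => console.table(all));

// IndexedDB can be browsed under DevTools > Application > IndexedDB, or listed with:
indexedDB.databases().then(dbs => console.log(dbs));
```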
Also, the installation instructions are in the GitHub README.