Other articles (111)

  • General document management

    13 May 2011, by

    MédiaSPIP never modifies the original document that is put online.
    For every document put online, it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original available for download in case the original document cannot be read in a web browser; and retrieving the original document's metadata to provide a textual description of the file.
    The tables below explain what MédiaSPIP can do (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their own information on the authors page

On other sites (4414)

  • Output a video with a "slide up" transition using more than 100 images in FFmpeg?

    9 July 2021, by Joseph Ladera Fugata

    I have more than a hundred images of the same size and format that my company wants to display on the big 9:16 (rotated 16:9) screen outside the front gate. It's supposed to be easy, but they require the images to slide from top to bottom, meaning the result should look like one smooth auto-scroll effect. I searched here and there, but no luck.

    


    I have tried xfade like this:

    


    ffmpeg -loop 1 -i input.txt -filter_complex
"xfade=transition=slideup:duration=10:offset=0,format=yuv420p" output.mp4


    


    It didn't do anything except print a bunch of errors about the inputs, which are supposed to be just two images in the first place.
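    For reference, xfade takes exactly two video inputs of the same size, pixel format, and frame rate, so a concat list cannot be fed to it directly. A minimal sketch of a working two-image slide-up, with hypothetical file names and timings (wrapped in Python only so it runs the same way on Windows):

    import subprocess

    # Hypothetical file names; both images must share the same resolution.
    subprocess.run([
        "ffmpeg", "-y",
        "-loop", "1", "-t", "6", "-framerate", "25", "-i", "img001.png",
        "-loop", "1", "-t", "6", "-framerate", "25", "-i", "img002.png",
        "-filter_complex",
        # slide up from img001 to img002, starting at t=5s and lasting 1s
        "[0:v][1:v]xfade=transition=slideup:duration=1:offset=5,format=yuv420p",
        "-c:v", "libx264", "two_image_test.mp4",
    ], check=True)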

    


    The next thing I tried was the concat approach from @Gyan's reply HERE, and here's my version of the command:

    


    ffmpeg -y -f concat -safe 0 -i input.txt
-vf tile=1x%img_count%,loop=%_my_loop_count_var%:1:0,
crop=iw:ih/%img_count%:0:clip((t-%_start_time%)/%sec_per_img%*ih/%img_count%\,0\,ih*%img_count_minus_one%/%img_count%)
-r 25 -c:v libx264 -preset ultrafast output.mp4


    


    When I played with it, it gave a different output even though the images are all the same dimensions.

    


    I found someone on YouTube who used this, but it's a bash script and I'm on Windows; if someone here could convert it to a batch script, that would be great. I looked into it and it seems he's just v-stacking the images, much like what I did, but there's more to it. I know I could go through Windows bash, but I doubt the script would run in a non-Unix environment just by having bash, and I'm not yet familiar with Cygwin either.

    


    I also tried other options posted by others here (I just forgot to bookmark them), but none of them works on more than a hundred images.

    


    I'd love to hear a response if anyone can help.
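    One way to make the tile/loop/crop command above reproducible on Windows is to generate input.txt and the filter string from a small script instead of batch variables. A minimal sketch, assuming the images live in a slides/ folder, all share the same dimensions, and each one should stay visible for about three seconds (the folder name and timing constants are placeholders):

    import glob
    import subprocess

    images = sorted(glob.glob("slides/*.png"))   # hypothetical folder of same-size images
    n = len(images)
    sec_per_img = 3                              # seconds of scrolling per image
    fps = 25
    total = n * sec_per_img                      # rough clip length in seconds

    # Concat demuxer list: every image becomes one input frame.
    with open("input.txt", "w", encoding="utf-8") as f:
        for path in images:
            f.write(f"file '{path}'\n")

    # Stack all frames into one tall strip, hold that single frame for the whole
    # clip, then crop a one-image-high window whose y position slides down over time.
    vf = (
        f"tile=1x{n},"
        f"loop={total * fps}:1:0,"
        f"crop=iw:ih/{n}:0:"
        f"clip(t/{sec_per_img}*ih/{n}\\,0\\,ih*{n - 1}/{n})"
    )

    subprocess.run([
        "ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "input.txt",
        "-vf", vf, "-r", str(fps), "-t", str(total),
        "-c:v", "libx264", "-preset", "ultrafast", "-pix_fmt", "yuv420p",
        "output.mp4",
    ], check=True)

    The -t option keeps ffmpeg from running past the last image, and the clip() bound stops the scroll once the window reaches the bottom of the strip.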

    


  • What Google Cloud service can be used to process files stored in Firebase Cloud Storage with FFmpeg? [closed]

    1 May 2021, by uponly

    I am building a ReactJs app and I am trying to figure out a way to process files (images, videos, and audio of any type) that are stored in my Firebase storage bucket using FFmpeg. Currently, I have set up the functionality for allowing the user to upload files to my storage bucket, and a corresponding URL link is stored in a document in Firestore.

    


    Ideally, I'd love to do this with Cloud Functions HTTP triggers because I have all of that set up already. It would be nice to just call an HTTP trigger to process the file after it has been uploaded. However, after a bit of research, my current understanding is that I would somehow have to deploy the processing in a Google App Engine flexible environment, because apparently that is the only way to set a long enough timeout in case I have to process a long, high-quality video, for example. Thus I wouldn't be able to use Cloud Functions, because their very short timeout period may lead to files not being fully processed.

    


    Here is the user flow I am trying to achieve, which may help make things clearer:

    


      

    1. [Done] The user uploads a file (image, audio, or video) to Firebase cloud storage. A URL is also stored in their corresponding user document in Firestore.
    2. [Here and the steps onward are what I am trying to achieve] After the file has been stored, I'd like to kick off some sort of function that grabs the newly stored file and begins to process it in the cloud.
    3. Store the newly processed file back into the Cloud Storage bucket.
    4. Allow the user to preview the processed file (ideally by streaming it, if possible).


    


    For step 2 and onward, I am just generally confused about which Google service I should be using to process my file in the cloud with FFmpeg, as well as how I can connect it to my React app on the client side. If I have to go the Google App Engine route, how do I connect App Engine to my React app without having to build and deploy the app, since it is still in development?
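    For the "process in the cloud" step, one option within these constraints is a Storage-triggered Cloud Function that downloads the upload, runs ffmpeg on the function's temporary disk, and writes the result back to the bucket; first-generation Cloud Functions are capped at roughly nine minutes, so long, high-quality videos would still need App Engine flexible, Cloud Run, or splitting the work. A minimal sketch, assuming a Python runtime and that an ffmpeg binary is available to the function (the output prefix, transcode settings, and function name are placeholders):

    import os
    import subprocess
    import tempfile

    from google.cloud import storage

    def process_upload(event, context):
        """Storage-finalize trigger: download, transcode with ffmpeg, re-upload."""
        bucket_name = event["bucket"]          # filled in by the Storage trigger
        object_name = event["name"]

        client = storage.Client()
        bucket = client.bucket(bucket_name)

        local_in = os.path.join(tempfile.gettempdir(), os.path.basename(object_name))
        local_out = local_in + ".processed.mp4"

        # 1. Pull the freshly uploaded file onto the function's temp disk.
        bucket.blob(object_name).download_to_filename(local_in)

        # 2. Process it (placeholder: transcode to H.264; assumes ffmpeg is on PATH).
        subprocess.run(["ffmpeg", "-y", "-i", local_in,
                        "-c:v", "libx264", "-preset", "fast", local_out], check=True)

        # 3. Push the result back under a processed/ prefix in the same bucket.
        bucket.blob("processed/" + os.path.basename(local_out)).upload_from_filename(local_out)

    The React app then only needs to read the processed/ object (or a download URL written back to Firestore) to preview the result, so no heavy processing ever runs client-side.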

    


    This is not a coding question, so I apologize if this is the wrong place to post. I am new to all this; any and all help is greatly appreciated. Thank you.

    


  • How can I schedule a YouTube livestream entirely from Linux?

    25 April 2021, by Dale Wellman

    I have a setup on a Raspberry Pi (with its native camera) that uses a cronjob to start an ffmpeg session with its output streaming to YouTube. I re-use the same stream key each time, which is written into my ffmpeg scripts. This all works perfectly each week, automatically starting and stopping at the desired time.
However, each week PRIOR to that livestream, I have to "manually" go into YouTube Studio and "schedule" a new future event. This is easy enough, since it lets me "reuse" previous settings; all I have to change is the Title, date, and time. But I would love to figure out a way to automate that part of the process as well. I assume it involves using the YouTube Data API, but I'm not well versed in APIs, JSON, etc.
(I do have a strong Linux background, bash scripting skills, and general programming background.)

    


    My final solution just needs to:

    


      

    • create the new scheduled event (maybe 12 hours prior to going live), with Title, Date, Time, "Unlisted" status, category, and so forth: all the usual settings I do manually within Studio
    • retrieve the assigned URL for the upcoming stream (my script will then email that to me)


    


    So, basically, I'm asking for help getting started with the API, or whatever method is capable of doing this. I would prefer to code it on the same Pi that does the ffmpeg encoding (although in a pinch, I could create the schedule from another computer, even Windows). Any examples would be great.

    


    So far, all I have done is create my Google project, enable the YouTube Data API in the project, and create my API key. But I'm not sure where to go from there.
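    From that point, the missing piece is OAuth rather than the API key: an API key only covers public read-only calls, so creating broadcasts needs an OAuth client (a "Desktop app" credential downloaded as, say, client_secret.json) and the youtube scope. A minimal sketch of the two calls involved, liveBroadcasts.insert to create the scheduled event and liveBroadcasts.bind to attach it to the existing reusable stream, using the google-api-python-client library (the title, schedule offset, and file name are placeholders):

    import datetime

    from google_auth_oauthlib.flow import InstalledAppFlow
    from googleapiclient.discovery import build

    SCOPES = ["https://www.googleapis.com/auth/youtube"]

    def main():
        # One-time interactive consent; a cron job would cache and reuse the refresh token.
        flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
        creds = flow.run_local_server(port=0)
        youtube = build("youtube", "v3", credentials=creds)

        start = (datetime.datetime.utcnow() + datetime.timedelta(hours=12)).isoformat() + "Z"

        # 1. Create the scheduled, unlisted broadcast.
        broadcast = youtube.liveBroadcasts().insert(
            part="snippet,status,contentDetails",
            body={
                "snippet": {"title": "Weekly livestream", "scheduledStartTime": start},
                "status": {"privacyStatus": "unlisted"},
                "contentDetails": {"enableAutoStart": True, "enableAutoStop": True},
            },
        ).execute()

        # 2. Bind it to the existing reusable stream (the one whose key ffmpeg already uses).
        streams = youtube.liveStreams().list(part="id,cdn", mine=True).execute()
        youtube.liveBroadcasts().bind(
            id=broadcast["id"], part="id,contentDetails",
            streamId=streams["items"][0]["id"],
        ).execute()

        # 3. The URL to email out: the broadcast id doubles as the video id.
        print("https://www.youtube.com/watch?v=" + broadcast["id"])

    if __name__ == "__main__":
        main()

    Run twelve hours before the existing cron-driven ffmpeg job, this would create the event and print the watch URL for the notification email.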