Advanced search

Media (2)

Keyword: - Tags -/documentation

Other articles (51)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administer" section of the site.
    From there, in the navigation menu, you can access a "Language management" section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language. In that case, it becomes greyed out in the configuration and (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all kinds online.
    It creates "media", that is: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a so-called "media" article;

On other sites (6905)

  • RTSP to RTMP using FFMPEG on Raspberry Pi to YouTube Livestream ends prematurely (and sometimes doesn't start)

    12 March 2021, by user203875

    I have been running a program from my Raspberry Pi 4 that converts an RTSP network camera feed to RTMP for YouTube. The stream used to run non-stop every day. I didn't have to do anything. I have a program in place that restarts the stream if the feed dies.

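    For context, the kind of relay-and-restart wrapper described above might look roughly like the sketch below (Node/TypeScript; the camera URL, the stream key and the ffmpeg flags are illustrative assumptions, not the poster's actual program):

    // Minimal sketch of an RTSP -> RTMP relay that restarts ffmpeg when it exits.
    // The URLs, the stream key and the encoding flags are assumptions for illustration.
    import { spawn } from "child_process";

    const RTSP_URL = "rtsp://camera.local:554/stream";              // hypothetical camera feed
    const RTMP_URL = "rtmp://a.rtmp.youtube.com/live2/STREAM-KEY";  // hypothetical YouTube ingest URL + key

    function startRelay(): void {
      const ffmpeg = spawn("ffmpeg", [
        "-rtsp_transport", "tcp",  // TCP tends to be more reliable than UDP for RTSP pulls
        "-i", RTSP_URL,
        "-c:v", "copy",            // pass the camera's video through without re-encoding
        "-c:a", "aac",             // YouTube expects an audio track
        "-f", "flv",
        RTMP_URL,
      ]);

      ffmpeg.stderr.on("data", (chunk) => process.stderr.write(chunk));

      // Mirror the "restart if the feed died" behaviour: relaunch whenever ffmpeg exits.
      ffmpeg.on("exit", (code) => {
        console.log(`ffmpeg exited with code ${code}, restarting in 5 s`);
        setTimeout(startRelay, 5000);
      });
    }

    startRelay();
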
    Nothing has changed with that program in 2 years. About a month ago, the feed stopped working. I have just started trying to figure out why. Maybe someone has some ideas about what is going on?

    In order for me to start the feed, I must also start a studio.youtube.com browser session showing the feed information. If that web page is up and running, the live stream will start. While I can't say for certain that it NEVER starts without this session running, it seems that way.

    Usually the stream lasts for an hour or two. Rarely more than four hours.

    When I start up a studio.youtube.com session after the stream dies, the "Dismiss" or "Edit in Studio" message is on the page. I have to hit "Dismiss" before I can start up the stream again.

    Is there a solution to this?

    Again, my program didn't change, so I'm at a loss for what I can do to fix this.

  • What is the fastest way to load a local image using JavaScript and/or Node.js, and the fastest way to getImageData?

    4 October 2020, by Tom Lecoz

    I'm working on an online video-editing tool for a large audience. Users can create "scenes" with multiple images, videos, text and sound, add a transition between 2 scenes, add special effects, etc.

    When users are happy with what they made, they can download the result as an mp4 file with the desired resolution and framerate. Let's say full-HD 60 fps for example (it can be bigger).

    I'm using Node.js & ffmpeg to build the mp4 from an HTMLCanvasElement. Because it's impossible to seek perfectly frame by frame with an HTMLVideoElement, I start by converting the videos from each "scene" into a sequence of PNGs using ffmpeg. Then I read my scene frame by frame and, if there are videos, I replace the video elements with an image containing the right frame. Once every image is loaded, I capture the frame and move on to the next one.

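    For illustration, the per-scene PNG dump described above could be driven from Node like this (a minimal sketch; the paths, fps value and frame naming pattern are assumptions):

    // Minimal sketch: dump a scene's video to a PNG sequence with ffmpeg.
    import { execFile } from "child_process";

    function extractFrames(videoPath: string, outDir: string, fps: number): Promise<void> {
      return new Promise((resolve, reject) => {
        execFile(
          "ffmpeg",
          ["-i", videoPath, "-vf", `fps=${fps}`, `${outDir}/frame_%05d.png`],
          (err) => (err ? reject(err) : resolve())
        );
      });
    }

    // e.g. await extractFrames("scene1/clip.mp4", "scene1/frames", 60);
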
    Everything works as expected but it's too slow! Even with a powerful computer (Ryzen 3900X, RTX 2080 Super, 32 GB of RAM, NVMe 970 Evo Plus), in the best case I can capture a basic full-HD movie (if it contains videos) at 40 FPS.

    It may sound good enough, but it's not. Our company produces thousands of mp4 files every day. A slow encoding process means more servers at work, so it will be more expensive for us.

    Until now, my company used (and is still using) a tool based on Adobe Flash, because the whole video-editing tool was made with Flash. I was (and am) in charge of translating the whole thing into HTML. I reproduced every feature one by one over 4 years (it's by far my biggest project) and this is the very last step, but even though the HTML version of our player works very well, the encoding process is much slower than the Flash version (which is able to encode full HD at 90-100 FPS).

    I put console.log calls everywhere in order to find what makes the encoding so slow, and there are 2 bottlenecks:

    As I said before, for each frame, if there are videos in the current scene, I replace the video elements with images representing the right frame at the right time. Since I'm using local files, I expected loading to be almost synchronous. That's not the case at all; it takes more than 10 ms in most cases.

    So my first question is: "What is the fastest way to handle local image loading with JavaScript for this kind of output?"

    I don't care about the technology involved, I have no preference, I just want to be able to load my local images faster than I do now.

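    One direction worth sketching (an illustration, not something benchmarked here) is to decode all of a scene's frame images up front and keep the resulting ImageBitmaps in memory, so the capture loop only has to draw:

    // Minimal sketch: pre-decode a scene's frame images into ImageBitmaps.
    // The fetch() source and the file naming are assumptions for illustration.
    async function preloadFrames(urls: string[]): Promise<ImageBitmap[]> {
      return Promise.all(
        urls.map(async (url) => {
          const blob = await (await fetch(url)).blob(); // local file served by the app
          return createImageBitmap(blob);               // decoding happens asynchronously
        })
      );
    }

    // Later, in the capture loop, drawing a cached bitmap avoids a per-frame load:
    // ctx.drawImage(frames[frameIndex], 0, 0, 1920, 1080);
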
    The second bottleneck is weird and to be honest I don't understand what's happening here.

    When the current frame is ready to be captured, I need to get its data using CanvasRenderingContext2D.getImageData in order to send it to ffmpeg, and this particular step is very slow.

    This single line

    let imageData = canvas.getContext("2d").getImageData(0,0,1920,1080);  

    takes something like 12-13 ms.
It's very slow!

    So I'm also searching for another way to extract the pixel data from my canvas.

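    One small aside: in browsers that support the willReadFrequently context hint, asking for a CPU-backed 2D context can make repeated getImageData calls cheaper, because it avoids a GPU-to-CPU copy on every frame. Whether it helps in this exact pipeline is an assumption:

    // Minimal sketch: request a 2D context optimised for frequent readbacks.
    const canvas = document.createElement("canvas");
    canvas.width = 1920;
    canvas.height = 1080;
    // willReadFrequently hints the browser to keep this canvas in CPU memory,
    // so getImageData does not have to copy pixels back from the GPU on each call.
    const ctx = canvas.getContext("2d", { willReadFrequently: true }) as CanvasRenderingContext2D;
    const imageData = ctx.getImageData(0, 0, 1920, 1080);
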
    A few days ago, I found an alternative to getImageData using the new VideoFrame class, which was created to be used with the VideoEncoder & VideoDecoder classes coming in Chrome 86. You can do something like this:

    let buffers: Uint8Array[] = [];
    createImageBitmap(canvas).then((bmp) => {
      // Wrapping the bitmap in a VideoFrame yields an I420 frame,
      // so there are three planes: Y (luma), then U and V (chroma).
      let videoFrame = new VideoFrame(bmp);
      for (let i = 0; i < 3; i++) {
        buffers[i] = new Uint8Array(videoFrame.planes[i].length);
        videoFrame.planes[i].readInto(buffers[i]);
      }
    });

    It lets me grab the pixel data around 25% faster than getImageData, but as you can see, I don't get a single RGBA buffer but 3 strange buffers matching the I420 format.

    Ideally, I would like to send it directly to ffmpeg, but I don't know how to deal with these 3 buffers (I have no experience with the I420 format).

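    For what it's worth, I420 is planar YUV 4:2:0: for a 1920x1080 frame, the Y plane is 1920x1080 bytes and the U and V planes are 960x540 bytes each. Assuming the planes are tightly packed (no row padding), they can be concatenated in Y, U, V order and piped to ffmpeg's rawvideo demuxer. A minimal sketch of the Node side, with resolution, framerate and output path as assumptions:

    // Minimal sketch: feed raw I420 frames to ffmpeg through stdin.
    import { spawn } from "child_process";

    const ffmpeg = spawn("ffmpeg", [
      "-f", "rawvideo",
      "-pix_fmt", "yuv420p",    // I420 layout: Y plane, then U, then V
      "-s", "1920x1080",
      "-r", "60",
      "-i", "pipe:0",           // read raw frames from stdin
      "-c:v", "libx264",
      "-pix_fmt", "yuv420p",
      "output.mp4",
    ]);

    // yPlane, uPlane and vPlane would be the three buffers read from the VideoFrame above.
    function writeFrame(yPlane: Uint8Array, uPlane: Uint8Array, vPlane: Uint8Array): void {
      ffmpeg.stdin.write(Buffer.concat([Buffer.from(yPlane), Buffer.from(uPlane), Buffer.from(vPlane)]));
    }
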
    I'm not at all sure the solution involving VideoFrame is a good one. If you know a faster way to transfer the data from a canvas to ffmpeg, please tell me.

    Thanks for reading this very long post. Any help would be much appreciated.

  • Started Programming Young

    6 September 2011, by Multimedia Mike — Programming

    I have some of the strangest memories of my struggles to jump into computer programming.

    Back To BASIC
    I remember doing some Logo programming on Apple II computers at school in 5th grade (1987 timeframe). But that was mostly driving turtle graphics. Then I remember doing some TRS-80 BASIC in 7th grade, circa 1989. Emboldened by what very little I had learned in perhaps the week or 2 we took in a science class to do this, I tried a little GW-BASIC on my family’s “IBM-PC compatible” computer (they were still called that back then). I still remember what my first program consisted of. Even back then I was interested in manipulating graphics and color on a computer screen. Thus:

    10 color 1
    20 print "This is color 1"
    30 color 2
    40 print "This is color 2"
    ...
    

    And so on through 15 colors. Hey, it did the job– it demonstrated the 15 different colors you could set in text mode.

    What’s FOR For?
    That 7th grade computer unit in science class wasn’t very thick on computer science details. I recall working with a lab partner to transcribe code listings into a computer (and also saving my work to a storage cassette). We also developed form processing programs that would print instructions to input text followed by an “INPUT I$” statement to obtain the user’s input.

    I remember there was some situation where we needed a brief delay between input and printing. The teacher told us to use a construct of the form:

    10 FOR I = 1 TO 20000
    20 NEXT I
    

    We had to calibrate the number based on our empirical assessment of how long it lasted but I recall that the number couldn’t be much higher than about 32000, for reasons that would become clearer much later.

    Imagine my confusion when I would read and try to comprehend BASIC program code I would find in magazines. I would of course see that FOR..NEXT construct all over the place but obviously not in the context of introducing deliberate execution delays. Indeed, my understanding of one of the fundamental building blocks of computer programming — iteration — was completely skewed because of this early lesson.

    Refactoring
    Somewhere along the line, I figured out that the FOR..NEXT could be used to do the same thing a bunch of times, possibly with different values. A few years after I had written that color program, I found it again and realized that I could write it as:

    10 for I = 1 to 15
    20 color I
    30 print I
    40 next I
    

    It still took me a few more years to sort out the meaning of WHILE..WEND, though.