Advanced search

Media (91)

Other articles (15)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should be attached automatically; objet, the type of object to which (...)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation from users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
    To contribute, register to the project users’ mailing (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (2672)

  • Sending Blobs from a Chrome Extension to a Node.js Process without WebSockets [closed]

    29 October 2023, by Matrix 404

    Question:
I have a Puppeteer script that runs a Chrome extension, which opens a webpage. The extension records that tab and sends the recorded blobs to the main Node.js process using WebSockets. The main process then streams these blobs to an RTMP server.

    I'm looking for an alternative method to send the blobs to the main process without using WebSockets. Additionally, I want to know whether it's possible to stream these blobs directly from the browser using FFmpeg wasm.
    Details:

    1. My current setup: Puppeteer script -> Chrome extension (recording) -> WebSockets -> Node.js process -> RTMP server.

    2. I'm exploring options to eliminate the use of WebSockets while maintaining the ability to send recorded blobs from the Chrome extension to the Node.js process efficiently.

    3. Is it possible to use FFmpeg wasm to stream blobs directly from the browser to an RTMP server? If so, how can this be achieved?
    


    Additional Information:

    • The technology stack I'm using includes Puppeteer, a Chrome extension, Node.js, and FFmpeg.
    • Any code snippets, examples, or recommended libraries are greatly appreciated.

    Constraints:

    • Compatibility with modern browsers and reasonable performance are essential.
    • Ideally, the solution should work in a headless Chrome instance.

    Thank you for your assistance in finding an efficient solution to this problem!

    


      


  • ffmpeg's getBufferedImage stopped working

    24 May 2016, by ken

    I wrote the following code and it worked fine. Then I had a computer crash... After the crash, I imported the backup, but getBufferedImage now gives the error
    "The method getBufferedImage() is undefined for the type Frame". I have looked at other examples of what I am trying to do (grab a frame from a video and save it as a buffered image) on Stack Overflow, and I have made sure to import everything. Any assistance would be GREATLY appreciated... I just want this working again so I can continue working on it. Thanks in advance!

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;

    import javax.imageio.ImageIO;

    import org.bytedeco.javacv.FFmpegFrameGrabber;
    import org.bytedeco.javacv.FrameGrabber.Exception;
    import org.bytedeco.javacv.Java2DFrameConverter;

    public class FrameData
    {
        int count = 0;
        int picWidth;
        int picHeight;

        BufferedImage img = null;

        // In current JavaCV versions Frame no longer has getBufferedImage();
        // Java2DFrameConverter.convert(Frame) is the documented replacement.
        Java2DFrameConverter converter = new Java2DFrameConverter();

        //GET FRAME COUNT
        public int gf_count(int numofFrames, String fileLocationsent, String videoNamesent) throws IOException
        {
            String fileLocation = fileLocationsent;
            String videoName = videoNamesent;
            int frameNums = numofFrames;
            int totFrames = 0;
            int num = 0;
            System.out.println("Determining # of Frames in Video...  Please be patient.");
            FFmpegFrameGrabber grabbing = new FFmpegFrameGrabber(fileLocation + videoName);

            try {   grabbing.start(); }
            catch (Exception e) {   System.out.println("Unable to grab frames");  }

            double startTime = System.currentTimeMillis();

            for(int i = 0 ; i < frameNums ; i++)
            {
                num = i;
                // THIS WAS THE PROBLEM LINE: grabbing.grab().getBufferedImage()
                // no longer compiles; convert the grabbed Frame instead.
                try { img = converter.convert(grabbing.grab()); }
                catch (NullPointerException | Exception e1)
                {   i = frameNums;  }
            }
            return num;
        }
    }

  • Merging/Joining three FFMPEG commands (drawtext / -filter_complex overlay / anullsrc=channel_layout)

    18 December 2020, by roar

    Currently I am using three different commands to create three MP4s, only to delete the two "temporary" videos using this code.

    


    @ECHO OFF
ffmpeg -f lavfi -i color=size=1280x720:duration=5:rate=25:color=Black -vf "drawtext=fontfile='GothamRnd-Book.otf':line_spacing=15:fontsize=15:fontcolor=white:x=(w-text_w)/2:y=(h-text_h)/2:text=Stack Exchange" "out1.mp4"
ffmpeg -i "out1.mp4" -i logo.png -filter_complex "overlay=x=10:y=10" "out2.mp4"
ffmpeg -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=48000 -i "out2.mp4" -c:v copy -c:a aac -shortest "out3.mp4"
del "out1.mp4"
del "out2.mp4"
pause


    


    The nearest I have come is moving the anullsrc=channel_layout into the -filter_complex, but that results in a long encode that I don't really understand what it is doing, because if I Ctrl-C to cancel, the batch still creates out3.mp4 correctly.

    


    ffmpeg -f lavfi -i color=size=1280x720:duration=5:rate=25:color=Black -vf "drawtext=fontfile='GothamRnd-Book.otf':line_spacing=15:fontsize=15:fontcolor=white:x=(w-text_w)/2:y=(h-text_h)/2:text=Stack Exchange" "out1.mp4"
ffmpeg -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=48000  -i "out1.mp4" -i logo.png -filter_complex "overlay=x=10:y=10" "out3.mp4"


    


    It seems like this could be streamlined so as not to create the temporary files, but maybe this is the only way to do it. Thank you for any assistance, and sorry if the answer is obvious.

    


    Rory
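
    For what it's worth, the three steps can probably be collapsed into one command. The following is an untested sketch in the style of the original batch file: both lavfi sources are declared as inputs, drawtext and overlay run in a single -filter_complex, and -shortest ends the encode when the 5-second video stream runs out (this is likely why the earlier attempt ran long: anullsrc is an infinite source, so the output needs something to bound it).

```shell
ffmpeg -f lavfi -i color=size=1280x720:duration=5:rate=25:color=Black -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=48000 -i logo.png -filter_complex "[0:v]drawtext=fontfile='GothamRnd-Book.otf':line_spacing=15:fontsize=15:fontcolor=white:x=(w-text_w)/2:y=(h-text_h)/2:text='Stack Exchange'[txt];[txt][2:v]overlay=x=10:y=10[vout]" -map "[vout]" -map 1:a -c:v libx264 -c:a aac -shortest "out3.mp4"
```

    This assumes GothamRnd-Book.otf and logo.png sit next to the script, as in the original commands; if -shortest still over-runs, adding -t 5 caps the output duration explicitly.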