Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (63)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Managing rights for creating and editing objects

    8 February 2011, by

    By default, many features are restricted to administrators, but they can be configured independently to change the minimum status required to use them, notably: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;

On other sites (8902)

  • How to process and upload large video files directly to cloud with ffmpeg but without fragmented MP4?

    9 April 2024, by volume one

    I am using ffmpeg via fluent-ffmpeg for Node.js to process videos uploaded by users.

    


    The problem I have is that if a user uploads a huge movie file, say 8 GB in size, I don't want to store the file on the server, which would soon run out of space.

    


    I thought a way to tackle this was to stream the output from ffmpeg straight to cloud storage like AWS S3. The only way to do this (I believe) is by using a PassThrough() stream:

    


    import { PassThrough } from 'node:stream';
    import FFMpeg from 'fluent-ffmpeg';

    


    let PassThroughStream = new PassThrough();

    


             FFMpeg('/testvideo.mp4')
                .videoCodec('libx264')
                .audioCodec('libmp3lame')
                .size(`640x480`)
                // Stream input requires manually specifying input format
                .inputFormat('mp4')
                // Stream output requires manually specifying output formats
                .format('mp4')
                // Must be fragmented for stream to work. This causes duration problem.
                .outputOptions('-movflags dash')
                .pipe(PassThroughStream, {end: true})
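
    For context, the PassThrough stream is what would be handed to the AWS SDK as the upload body. A minimal sketch of that consuming side, assuming the v3 SDK's @aws-sdk/lib-storage Upload helper; the bucket, key and region are made up:

    import { PassThrough } from 'node:stream';
    import { S3Client } from '@aws-sdk/client-s3';
    import { Upload } from '@aws-sdk/lib-storage';

    // The same kind of stream the FFMpeg chain above pipes into.
    const PassThroughStream = new PassThrough();

    // Multipart upload that drains the stream while ffmpeg writes into it.
    // Bucket, Key and region are placeholders.
    const upload = new Upload({
        client: new S3Client({ region: 'us-east-1' }),
        params: { Bucket: 'my-bucket', Key: 'transcoded/testvideo.mp4', Body: PassThroughStream },
    });

    // done() resolves once S3 has received the whole object
    // (top-level await, assuming an ES module).
    await upload.done();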


    


    When the video is created using fragmented MP4, there is no duration associated with the file, which means it has no length metadata. That makes playback difficult in a browser and is unacceptable.

    



    


    The only way I have been able to get a proper length property set in the file's metadata is by not using fragmented MP4 (that is, the -movflags dash part in the code above). Without fragmentation, I cannot stream the output directly to cloud storage; I have to save the file somewhere locally first.
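
    As an illustration of that local-file fallback (the output path is hypothetical), the non-fragmented variant might look roughly like this:

    import FFMpeg from 'fluent-ffmpeg';

    // Writing to a regular file lets ffmpeg seek back and finalize the moov
    // atom, so the resulting MP4 carries normal duration metadata.
    FFMpeg('/testvideo.mp4')
        .videoCodec('libx264')
        .audioCodec('libmp3lame')
        .size('640x480')
        .format('mp4')
        .on('end', () => {
            // The finished file at /tmp/testvideo-out.mp4 can now be uploaded
            // (for example with the Upload helper sketched earlier) and deleted.
        })
        .save('/tmp/testvideo-out.mp4');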

    


    I think I am missing something but don't know what. How could this be solved? I want to process and write the output to AWS S3 without storing the file locally and without creating a fragmented MP4.

    


  • Best way to pipe audio and video chunks from within python to ffmpeg

    8 May 2016, by basilikum

    Problem

    I’m getting audio and video chunks from a third-party server and I would like to pipe those chunks to ffmpeg to create a WebM live stream according to these instructions:

    http://wiki.webmproject.org/adaptive-streaming/instructions-to-do-webm-live-streaming-via-dash

    Here they are using input from a webcam and microphone, but I need to use the data chunks, so the ffmpeg command would look something like this:

       cmd = [
           "ffmpeg",
           # input 0: FLV video read from a named pipe
           "-f", "flv", "-i", "video.fifo",
           # input 1: raw 16 kHz mono PCM audio read from a named pipe
           "-f", "s16le", "-ar", "16000", "-ac", "1", "-i", "audio.fifo",
           # video branch: VP9-encoded WebM chunks
           "-map", "0:0",
           "-pix_fmt", "yuv420p",
           "-c:v", "libvpx-vp9",
           "-s", "640x480", "-keyint_min", "40", "-g", "40", "-speed", "6",
           "-tile-columns", "4", "-frame-parallel", "1", "-threads", "8",
           "-static-thresh", "0", "-max-intra-rate", "300",
           "-deadline", "realtime", "-lag-in-frames", "0",
           "-error-resilient", "1",
           "-b:v", "3000k",
           "-f", "webm_chunk",
           "-header", self.video_header,
           "-chunk_start_index", "1",
           "video_360_%d.chk",
           # audio branch: Vorbis-encoded WebM chunks
           "-map", "1:0",
           "-c:a", "libvorbis",
           "-b:a", "16k", "-ar", "16000",
           "-f", "webm_chunk",
           "-audio_chunk_duration", "2000",
           "-header", self.audio_header,
           "-chunk_start_index", "1",
           "audio_171_%d.chk"
       ]

    As you can see, I am using "video.fifo" and "audio.fifo" files, because I thought it would be a good idea to pipe the chunks in via named pipes, but I can't get it to work. Here is what I'm doing:

    import os, subprocess

    p = subprocess.Popen(cmd)
    fv = os.open("video.fifo", os.O_WRONLY)
    fa = os.open("audio.fifo", os.O_WRONLY)  # this second open never returns, as described below

    So I'm starting the subprocess first so that it opens the FIFO files for reading. After that, I should be able to open them for writing, but I am not. More specifically, I am able to open the first one, but not the second one. So maybe that has something to do with how ffmpeg handles its inputs when there is more than one, but I just don't know.
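
    For what it's worth, a common way around blocking open() calls on FIFOs is to create the pipes up front and open each write end in its own thread, so neither open waits on the other. A rough sketch, where video_chunks and audio_chunks stand in for the data arriving from the third-party server:

    import os
    import subprocess
    import threading

    for path in ("video.fifo", "audio.fifo"):
        if not os.path.exists(path):
            os.mkfifo(path)

    p = subprocess.Popen(cmd)  # cmd as defined above

    def feed(path, chunks):
        # os.open() on a FIFO blocks until ffmpeg opens the read end,
        # so each writer gets its own thread.
        fd = os.open(path, os.O_WRONLY)
        try:
            for chunk in chunks:
                os.write(fd, chunk)
        finally:
            os.close(fd)

    threading.Thread(target=feed, args=("video.fifo", video_chunks), daemon=True).start()
    threading.Thread(target=feed, args=("audio.fifo", audio_chunks), daemon=True).start()
    p.wait()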

    Question

    How can I either fix the named pipes that cannot be opened, or achieve the same result without named pipes?

  • Options for replacing RTMP for live streaming

    22 December 2016, by molokoV

    I have a backend streaming video to web browsers using RTMP. In the browsers we use jwplayer.
    As everybody knows, Flash Player is going to be deprecated soon.
    I'm looking for options to modify the backend to use another streaming solution.

    We have made some tests using DASH, but it has too much delay for live streaming compared to RTMP.

    What are the options for anyone using RTMP?