
Other articles (94)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    To work properly, the central/master site of the farm needs several plugins in addition to those of the channels: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a pooled instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (5465)

  • OpenCV to ffplay from named pipe (fifo)

    16 November 2016, by Betsalel Williamson

    I’ve been working on piping video from OpenCV in C++. I’ve tried to pipe the image, after processing in OpenCV, into a named pipe, with the end goal of republishing the stream through a web server, either with VLC or with a Node.js server.

    Where I’m stuck is that the output from OpenCV doesn’t seem to come through correctly: the video always has artifacts, even though it should be the raw video.


    #include <cerrno>
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>
    #include <iostream>

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #include <opencv2/opencv.hpp>

    using namespace std;
    using namespace cv;

    int main(int argc, char** argv)
    {
        VideoCapture camera(argv[1]);

        float fps = 15;

        // VLC raw video
        printf("Run command:\n\ncat /tmp/myfifo | cvlc --demux=rawvideo --rawvid-fps=%4.2f --rawvid-width=%.0f --rawvid-height=%.0f  --rawvid-chroma=RV24 - --sout \"#transcode{vcodec=h264,vb=200,fps=30,width=320,height=240}:std{access=http{mime=video/x-flv},mux=ffmpeg{mux=flv},dst=:8081/stream.flv}\""
            ,fps
            ,camera.get(CV_CAP_PROP_FRAME_WIDTH)
            ,camera.get(CV_CAP_PROP_FRAME_HEIGHT)
            );

        // ffplay raw video
        printf("Run command:\n\ncat /tmp/myfifo | ffplay -f rawvideo -pixel_format bgr24 -video_size %.0fx%.0f -framerate %4.2f -i pipe:"
            ,camera.get(CV_CAP_PROP_FRAME_WIDTH)
            ,camera.get(CV_CAP_PROP_FRAME_HEIGHT)
            ,fps
            );

        int fd;
        int status;

        char const * myFIFO = "/tmp/myfifo";

        // Create the named pipe; failure (e.g. it already exists) is ignored here.
        if ((status = mkfifo(myFIFO, 0666)) < 0) {
            // printf("Fifo mkfifo error: %s\n", strerror(errno));
            // exit(EXIT_FAILURE);
        } else {
            cout << "Made a named pipe at: " << myFIFO << endl;
        }

        cout << "\n\nHit any key to continue after running one of the previously listed commands..." << endl;
        cin.get();

        // Open the write end non-blocking; this fails if no reader is attached yet.
        if ((fd = open(myFIFO, O_WRONLY | O_NONBLOCK)) < 0) {
            printf("Fifo open error: %s\n", strerror(errno));
            exit(EXIT_FAILURE);
        }

        while (true)
        {
            if (waitKey(1) > 0)
            {
                break;
            }

            Mat colorImage;
            camera >> colorImage;

            // method: named pipe as matrix writes data to the named pipe, but image has glitch
            // (only a negative return value from write() is treated as an error;
            // a short write is not detected)
            size_t bytes = colorImage.total() * colorImage.elemSize();

            if (write(fd, colorImage.data, bytes) < 0) {
                printf("Error in write: %s \n", strerror(errno));
            }
        }

        close(fd);

        exit(EXIT_SUCCESS);
    }
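
    One detail worth noting (a hedged aside, not a confirmed fix): write() on a pipe opened with O_NONBLOCK may transfer fewer bytes than requested once the pipe buffer fills, and the code above only checks for a negative return value, so silently dropped bytes would shift every subsequent frame and look exactly like raw-video artifacts. A minimal sketch of a helper that retries until the whole frame has been written might look like this (the name write_all is hypothetical):

    #include <cerrno>
    #include <cstddef>
    #include <unistd.h>

    // Write exactly 'count' bytes to 'fd', retrying on short writes,
    // EINTR and EAGAIN. Returns 0 on success, -1 on an unrecoverable error.
    static int write_all(int fd, const unsigned char* buf, size_t count)
    {
        size_t written = 0;
        while (written < count) {
            ssize_t n = write(fd, buf + written, count - written);
            if (n > 0) {
                written += static_cast<size_t>(n);
            } else if (n < 0 && (errno == EAGAIN || errno == EINTR)) {
                continue;   // pipe buffer full or call interrupted: try again
            } else {
                return -1;  // real error, e.g. the reader closed the pipe
            }
        }
        return 0;
    }

    Calling write_all(fd, colorImage.data, bytes) in place of the single write() would at least rule this out; checking colorImage.isContinuous() before writing is also worth doing, since a non-continuous Mat cannot be dumped with one flat write.
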
  • Live encoding with FFmpeg, decklink and pipe [duplicate]

    5 November 2016, by SKALIS

    What is the best way to get live video input from a Blackmagic DeckLink card and have it encoded with an external encoder (NvEncoder)?

    I tried:

    mkfifo output.yuv
    ffmpeg -f decklink -r 30000/1000 -pix_fmt uyvy422 -i "DeckLink Mini Recorder 4K@24" output.yuv &
    NvEncoder -i output.yuv -pix_fmt yuv420p -bitrate 21M -fps30 output.ts

    The problem seems to be that NvEncoder cannot take the uyvy422 format, so can I change that to yuv420p when capturing, before sending it through the pipe?

    The DeckLink card can only give me uyvy422.
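
    One hedged possibility (a sketch only, assuming NvEncoder really does accept raw yuv420p and the flags shown in the question): ffmpeg can do the chroma conversion itself on the output side, so the data that reaches the FIFO is already yuv420p:

    mkfifo output.yuv
    ffmpeg -f decklink -r 30000/1000 -i "DeckLink Mini Recorder 4K@24" \
           -pix_fmt yuv420p -f rawvideo output.yuv &
    NvEncoder -i output.yuv -pix_fmt yuv420p -bitrate 21M -fps30 output.ts

    Here -pix_fmt uyvy422 is dropped from the input options and -pix_fmt yuv420p is given as an output option instead, which asks ffmpeg to convert whatever the card delivers before writing the raw frames to the pipe.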

  • How to pipe ppm data into ffmpeg from blender frameserver with while loop in PowerShell

    14 October 2016, by Radium

    The Blender 2.6 manual features this little sh script for encoding a video from the Blender frameserver via ffmpeg. It works great on Windows with Cygwin, but only without the -hwaccel hardware acceleration flag.

    #!/bin/sh
    BLENDER=http://localhost:8080
    OUTPUT=/tmp/output.ogv
    eval `wget ${BLENDER}/info.txt -O - 2>/dev/null |
       while read key val ; do
           echo R_$key=$val  
       done`
    i=$R_start
    {
       while [ $i -le $R_end ] ; do
          wget ${BLENDER}/images/ppm/$i.ppm -O - 2>/dev/null
          i=$(($i+1))
       done
    } | ffmpeg -vcodec ppm -f image2pipe -r $R_rate -i pipe:0 -b 6000k -vcodec libtheora $OUTPUT
    wget ${BLENDER}/close.txt -O - 2>/dev/null >/dev/null

    I’d like to encode my videos from Blender’s frameserver on Windows with -hwaccel dxva2, which works from PowerShell. I’ve begun converting the script to PowerShell, but I have run into one last problem: I am having difficulty replicating this part of the script in PowerShell.

    i=$R_start
    {
       while [ $i -le $R_end ] ; do
          wget ${BLENDER}/images/ppm/$i.ppm -O - 2>/dev/null
          i=$(($i+1))
       done
    } | ffmpeg -vcodec ppm -f image2pipe -r $R_rate -i pipe:0 -b 6000k -vcodec libtheora $OUTPUT

    Below is my conversion to PowerShell.

    echo "gathering data";
    $blender = "http://localhost:8080";
    $output = "C:\Users\joel\Desktop\output.mp4";
    $webobj = wget $blender"/info.txt";
    $lines = $webobj.Content -split('[\r\n]') | ? {$_};
    $info = @{};
    foreach ($line in $lines) {
       $lineinfo = $line -split('[\s]') | ? {$_};
       $info[$lineinfo[0]] = $lineinfo[1];
    }
    echo $info;
    [int]$end = [convert]::ToInt32($info['end'],10);
    [int]$i = [convert]::ToInt32($info['start'],10);
    $video="";
    ( while ($i -le $end) {
       $frame = wget $blender"/images/ppm/"$i".ppm" > $null;
       echo $frame.Content > $null;
       $i++;
    } ) | ffmpeg -hwaccel dxva2 -vcodec ppm -f image2pipe -r $info['rate'] -i pipe:0 -b 6000k -vcodec libx264 $output;

    This is the piece I’m having trouble with. I’m not quite sure what the proper syntax is to pipe the data into the ffmpeg command in the same way as the bash script above.

    ( while( $i -le $end ) {
       $frame = wget $blender"/images/ppm/"$i".ppm" > $null;
       echo $frame.Content > $null;
       $i++;
    } ) | ffmpeg -hwaccel dxva2 -vcodec ppm -f image2pipe -r $info['rate'] -i pipe:0 -b 6000k -vcodec libx264 $output;

    Here is the output:

    PS C:\Users\joel\Desktop> .\encode.ps1
    gathering data

    Name                           Value
    ----                           -----
    rate                           30
    height                         720
    ratescale                      1
    end                            57000
    width                          1280
    start                          1

    while : The term ’while’ is not recognized as the name of a cmdlet, function,
    script file, or operable program. Check the spelling of the name, or if a path
    was included, verify that the path is correct and try again.
    At C:\Users\joel\Desktop\encode.ps1:15 char:3
    + ( while ($i -le $end)

        + CategoryInfo          : ObjectNotFound: (while:String) [], CommandNotFoundException
        + FullyQualifiedErrorId : CommandNotFoundException
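
    A hedged sketch of what might replace the failing construct (assuming the same $blender, $info, $i and $end variables as in the script above): PowerShell cannot pipe from a bare statement wrapped in parentheses, but it can pipe from a script block invoked with the call operator &, which streams the block’s output into the next command:

    & {
        while ($i -le $end) {
            # wget is an alias for Invoke-WebRequest; emit each frame's content
            (Invoke-WebRequest "$blender/images/ppm/$i.ppm").Content
            $i++
        }
    } | ffmpeg -hwaccel dxva2 -vcodec ppm -f image2pipe -r $info['rate'] -i pipe:0 -b 6000k -vcodec libx264 $output

    This only removes the "term ’while’ is not recognized" parse error; a remaining caveat is that the Windows PowerShell pipeline converts objects to text when feeding a native program such as ffmpeg, so the raw PPM bytes may still arrive mangled and a byte-safe hand-off (for example via cmd.exe or a temporary file) may still be needed.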