Other articles (35)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
    To contribute, register to the project users’ mailing (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct steps.
    Upload and retrieval of information about the source video
    First, a SPIP article has to be created and the "source" video document attached to it.
    The moment this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
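
    A minimal sketch of those two actions using ffprobe/ffmpeg from Python (for illustration only; this excerpt does not show MediaSPIP's actual code, and the file names here are hypothetical):

    # Illustration of the two actions described above: probe the technical
    # stream information, then extract one frame as a thumbnail.
    import json
    import subprocess as sp

    source = 'source-video.mp4'   # hypothetical attached document

    # 1. Retrieve the technical information of the audio and video streams.
    info = json.loads(sp.check_output([
        'ffprobe', '-v', 'quiet', '-print_format', 'json',
        '-show_streams', source]))
    for stream in info['streams']:
        print(stream['codec_type'], stream.get('codec_name'))

    # 2. Generate a thumbnail by extracting a single frame.
    sp.check_call(['ffmpeg', '-y', '-ss', '1', '-i', source,
                   '-vframes', '1', 'thumbnail.jpg'])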

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (6283)

  • Broadcast mjpeg stream via websocket using ffmpeg and Python Tornado

    25 February 2016, by Asampaiz

    Well, I have been struggling for weeks now, searching and reading hundreds of pages, and I have nearly surrendered.

    I need your help, so here is the story: I want to stream my Logitech C930e webcam (connected to a Raspberry Pi 2) to a web browser. I have tried many different ways, such as using ffserver to pass the stream from ffmpeg to the web browser, but they all share the same flaw: they all need re-encoding. ffserver always re-encodes the stream passed to it by ffmpeg, whether or not it is already in the right format. My webcam has built-in mjpeg encoding up to 1080p, which is the reason I chose it; I don't want to use all of the Raspberry Pi 2's resources just to re-encode the stream.

    That approach ended up eating all of my Raspberry Pi 2's resources.

    Logitech C930e ---mjpeg 720p (compressed) instead of rawvideo---> ffmpeg (copy, no re-encoding) ---http---> ffserver (re-encoding to mjpeg; this is the problem) ---http---> Firefox

    My new approach

    Logitech C930e ---mjpeg 720p (compressed) instead of rawvideo---> ffmpeg (copy, no re-encoding) ---pipe---> Python 3 (using Tornado as the web framework) ---websocket---> Firefox

    The problem with the new approach

    The problem is that I cannot make sure the stream format passed by ffmpeg through the pipe to Python is ready/compatible to be streamed to the browser via websocket. I have already done all the steps above, but the result is an unreadable image in the browser (like a TV that has lost its signal).

    1. I need help figuring out how to feed Python the right mjpeg stream format from ffmpeg (see the sketch right after my current script below).
    2. I need help on the client side (JavaScript): how do I display the binary message (the mjpeg stream) that is sent via websocket?

    This is my current script

    Executing ffmpeg in Python (pipe) - Server Side

    --- cut ---
            multiprocessing.Process.__init__(self)
            self.resultQ = resultQ
            self.taskQ = taskQ
            self.FFMPEG_BIN = "/home/pi/bin/ffmpeg"
            self.video_w = 1280
            self.video_h = 720
            self.video_res = '1280x720'
            self.webcam = '/dev/video0'
            self.frame_rate = '10'
            self.command = ''
            self.pipe = ''
            self.stdout = ''
            self.stderr = ''

        # Start ffmpeg. These parameters need to be adjusted;
        # video formats already tried: rawvideo, singlejpeg, mjpeg,
        # mpjpeg, image2pipe.
        # I need help here (to make sure the format is right for the pipe).
        def camera_stream_start(self):
            self.command = [ self.FFMPEG_BIN,
                '-loglevel', 'debug',
                '-y',
                '-f', 'v4l2',
                '-input_format', 'mjpeg',
                '-s', self.video_res,
                '-r', self.frame_rate,
                '-i', self.webcam,
                '-c:v', 'copy',
                '-an',
                '-f', 'rawvideo',
                #'-pix_fmts', 'rgb24',
                '-']
            self.pipe = sp.Popen(self.command, stdin=sp.PIPE, stdout=sp.PIPE, shell=False)
            #return self.pipe

        # Stop ffmpeg.
        def camera_stream_stop(self):
            self.pipe.stdout.flush()
            self.pipe.terminate()
            self.pipe = ''
            #return self.pipe

        def run(self):
            # Start the stream.
            self.camera_stream_start()
            logging.info("** Camera process started")
            while True:
                # Read the stream from the pipe. This part also needs
                # to be adjusted (I need help here): processing the
                # bytes read so they can be sent to the browser via
                # websocket.
                stream = self.pipe.stdout.read(self.video_w*self.video_h*3)

                # Reply format for the main process; there, the data is
                # sent to the client over a binary websocket
                # (self.write_message(data, binary=True)).
                rpl = {
                    'task' : 'feed',
                    'is_binary': True,
                    'data' : stream
                }
                self.pipe.stdout.flush()
                self.resultQ.put(rpl)
                # Add a short wait.
                time.sleep(0.01)
            self.camera_stream_stop()
            logging.info("** Camera process ended")

    ffmpeg output

    --- Cut ---    
    Successfully opened the file.
    Output #0, rawvideo, to 'pipe:':
     Metadata:
       encoder         : Lavf57.26.100
       Stream #0:0, 0, 1/10: Video: mjpeg, 1 reference frame, yuvj422p(center), 1280x720 (0x0), 1/10, q=2-31, -1 kb/s, 10 fps, 10 tbr, 10 tbn, 10 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    --- Cut ---    

    JavaScript websocket - on the client side

    --- Cut ---
    socket = new WebSocket(url, protocols || []);
    socket.binaryType = "arraybuffer";

    socket.onmessage = function (message) {
        //log.debug(message.data instanceof ArrayBuffer);
        //This is for the stream that is sent via websocket
        if(message.data instanceof ArrayBuffer)
        {
            //I need help here:
            //how can I process the binary stream
            //so it can be shown in the browser (img)?
            var bytearray = new Uint8Array(message.data);
            var imageheight = 720;
            var imagewidth = 1280;

            var tempcanvas = document.createElement('canvas');
            tempcanvas.height = imageheight;
            tempcanvas.width = imagewidth;
            var tempcontext = tempcanvas.getContext('2d');

            var imgdata = tempcontext.getImageData(0,0,imagewidth,imageheight);

            var imgdatalen = imgdata.data.length;

            //copy the received bytes into the canvas pixel buffer
            for(var i=8;i<imgdatalen;i++)
            {
                imgdata.data[i] = bytearray[i];
            }
            tempcontext.putImageData(imgdata,0,0);
        }
        //this is for an ordinary string that is sent via websocket
        else{
            pushData = JSON.parse(message.data);
            console.log(pushData);
        }

    --- Cut ---

    Any help, feedback or anything else is very much appreciated. If something is not clear, please let me know.

  • FFMPEG: Video file to YUV conversion by binary ffmpeg and by C++ code gives different results

    30 June 2016, by Anny G

    Disclaimer: I have looked at the following question,
    FFMPEG: RGB to YUV conversion by binary ffmpeg and by code C++ give different results
    but it didn't help, and it is not applicable to me because I am not using SwsContext or anything like that.

    Following the first few tutorials at http://dranger.com/ffmpeg/, I have created a simple program that reads a video, decodes it and, once a frame is decoded, writes the raw yuv values to a file (no padding), using the data provided by AVFrame. To be more specific, I write out the arrays AVFrame->data[0], AVFrame->data[1] and AVFrame->data[2] to a file, i.e. I simply append the Y values, then the U values, then the V values. The file turns out to be in yuv422p format.

    When I convert the same original video to raw yuv using the ffmpeg command-line tool (same version of ffmpeg), the two yuv files are the same size but differ in content.

    FYI, I am able to play both of the yuv files using the yuv player, and they look identical as well.
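
    Here is a small helper I sketched to investigate (hypothetical code, not part of either program): it locates the first differing byte between the two files and maps it back to a frame, plane and pixel, assuming yuv422p at 1280x720:

    # Sketch: find where two raw yuv422p files first differ and map the
    # byte offset back to (frame, plane, row, col). Assumes 1280x720.
    import sys

    W, H = 1280, 720
    Y_SIZE = W * H                     # luma plane: one byte per pixel
    C_SIZE = (W // 2) * H              # each chroma plane: half width (4:2:2)
    FRAME_SIZE = Y_SIZE + 2 * C_SIZE   # 1843200 bytes per frame

    def locate(offset):
        frame, pos = divmod(offset, FRAME_SIZE)
        if pos < Y_SIZE:
            plane, width = 'Y', W
        elif pos < Y_SIZE + C_SIZE:
            plane, width = 'U', W // 2
            pos -= Y_SIZE
        else:
            plane, width = 'V', W // 2
            pos -= Y_SIZE + C_SIZE
        row, col = divmod(pos, width)
        return frame, plane, row, col

    with open(sys.argv[1], 'rb') as f1, open(sys.argv[2], 'rb') as f2:
        a = f1.read()
        b = f2.read()

    for i in range(min(len(a), len(b))):
        if a[i] != b[i]:
            print('first difference at byte', i, '->', locate(i))
            break
    else:
        print('no byte differences found')

    If the differences turn out to sit only in the U/V planes, that would point at the chroma upsampling (the input is yuv420p according to the log below, so both tools have to convert to 4:2:2, and they may not do it identically).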

    Here is the exact command I run to convert the original video to yuv using the ffmpeg command-line tool:

    ~/bin/ffmpeg -i super-short-video.h264 -c:v rawvideo -pix_fmt yuv422p  "super-short-video-yuv422p.yuv"

    What causes this difference in bytes, and can it be fixed? Is there perhaps another way of converting the original video to yuv with the ffmpeg tool, maybe with different settings?

    ffmpeg output when I convert to the yuv format:

    ffmpeg version N-80002-g5afecff Copyright (c) 2000-2016 the FFmpeg developers
     built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.1)
     configuration: --prefix=/home/me/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/me/ffmpeg_build/include --extra-ldflags=-L/home/me/ffmpeg_build/lib --bindir=/home/me/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree --extra-cflags=-pg --extra-ldflags=-pg --disable-stripping
     libavutil      55. 24.100 / 55. 24.100
     libavcodec     57. 42.100 / 57. 42.100
     libavformat    57. 36.100 / 57. 36.100
     libavdevice    57.  0.101 / 57.  0.101
     libavfilter     6. 45.100 /  6. 45.100
     libswscale      4.  1.100 /  4.  1.100
     libswresample   2.  0.101 /  2.  0.101
     libpostproc    54.  0.100 / 54.  0.100
    Input #0, h264, from 'super-short-video.h264':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: h264 (High), yuv420p, 1280x720, 25 fps, 25 tbr, 1200k tbn
    [rawvideo @ 0x24f6fc0] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
    Output #0, rawvideo, to 'super-short-video-yuv422p.yuv':
     Metadata:
       encoder         : Lavf57.36.100
       Stream #0:0: Video: rawvideo (Y42B / 0x42323459), yuv422p, 1280x720, q=2-31, 200 kb/s, 25 fps, 25 tbn
       Metadata:
         encoder         : Lavc57.42.100 rawvideo
    Stream mapping:
     Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
    Press [q] to stop, [?] for help
    frame=   50 fps=0.0 q=-0.0 Lsize=   90000kB time=00:00:02.00 bitrate=368640.0kbits/s speed=11.3x    
    video:90000kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000%

  • Compile ffmpeg and x264 with --enable-shared

    20 August 2016, by John Allard

    I am trying to compile ffmpeg with shared-library support, because when I compiled it statically it produced huge libraries, such as a 75MB libavcodec.a. Any program that I compiled against libavcodec was at least 50MB in size. This only happens on my Raspberry Pi, though; if I do

    ls -lsth $(which ffmpeg)

    on my RPI3 I get

    15M -rwxr-xr-x 1 pi pi 15M Jul 29 21:49 /home/pi/camiocam/cam/binaries/bin//ffmpeg

    While if I do the same on my MacBook I get this output

    488 -r-xr-xr-x  1 john  admin   242K Jul 26 19:34 /usr/local/Cellar/ffmpeg/3.1.1/bin/ffmpeg

    To get shared-library support I did the following:

    # download and install libx264
    cd ~/ffmpeg_build
    git clone git://git.videolan.org/x264.git
    cd x264
    PATH="$HOME/bin:$PATH" ./configure --host=arm-unknown-linux-gnueabi  --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" --enable-shared --disable-opencl
    # PATH="$HOME/bin:$PATH" make -j 8
    # PATH="$HOME/bin:$PATH" make install
    # ldconfig

    and ...

    # to install ffmpeg

    cd ~/ffmpeg_sources
    wget http://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2
    tar xjvf ffmpeg-snapshot.tar.bz2
    cd ffmpeg
    PATH="$HOME/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure --prefix="$HOME/ffmpeg_build" --arch=armel --target-os=linux --extra-cflags="-I$HOME/ffmpeg_build/include" --enable-pic  --extra-ldflags="-L$HOME/ffmpeg_build/lib" --bindir="$HOME/bin" --enable-gpl --enable-libx264 --enable-nonfree --enable-shared
    PATH="$HOME/bin:$PATH" make -j 8

    Which gives this output

    License: nonfree and unredistributable
    Creating config.mak, config.h, and doc/config.texi...
    config.h is unchanged
    libavutil/avconfig.h is unchanged
    libavcodec/bsf_list.c is unchanged
    libavformat/protocol_list.c is unchanged
    CC      libavcodec/libx264.o
    POD     doc/ffprobe.pod
    POD     doc/ffmpeg-all.pod
    POD     doc/ffserver-all.pod
    POD     doc/ffmpeg.pod
    POD     doc/ffprobe-all.pod
    POD     doc/ffserver.pod
    MAN     doc/ffprobe.1
    MAN     doc/ffmpeg.1
    MAN     doc/ffserver.1
    MAN     doc/ffmpeg-all.1
    MAN     doc/ffprobe-all.1
    MAN     doc/ffserver-all.1
    GEN     libavcodec/libavcodec.ver
    LD      libavcodec/libavcodec.so.57
    AR      libavcodec/libavcodec.a
    LD      libavformat/libavformat.so.57
    LD      libavfilter/libavfilter.so.6
    LD      libavdevice/libavdevice.so.57
    library.mak:102: recipe for target 'libavdevice/libavdevice.so.57' failed
    make: *** [libavdevice/libavdevice.so.57] Killed
    make: *** Deleting file 'libavdevice/libavdevice.so.57'

    I've tried adding the --enable-pic flag to the ffmpeg configure command, but that made no difference.