Media (1)

Keyword: - Tags -/punk

Other articles (26)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The journey of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article has to be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are carried out on top of the normal behaviour: retrieval of the technical information about the file's audio and video streams, and generation of a thumbnail by extracting a (...)
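
    As an aside, here is a minimal command-line sketch of those two extra actions, using ffprobe and ffmpeg with hypothetical file names; it illustrates the idea, and is not necessarily how SPIPMotion implements it:

    # Hypothetical file names; a sketch of the two steps described above.
    ffprobe -show_format -show_streams source.mp4            # read the technical info of the audio/video streams
    ffmpeg -i source.mp4 -ss 5 -vframes 1 -y thumbnail.jpg   # extract a single frame as a thumbnail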

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used as a fallback.
    The HTML5 player used was created specifically for MediaSPIP: its appearance can be fully customised to match a chosen theme.
    These technologies make it possible to deliver video and audio both to conventional computers (...)

On other sites (4980)

  • Video encoding libraries for iOS

    7 November 2011, by peetonn

    I'm really stuck on this problem, because I haven't found much information on the internet about video encoding on iOS, even though there are plenty of apps that handle video streaming successfully (Skype, Qik, Justin.tv, etc.).
    I'm going to develop an application that should send video frames obtained from the camera and encoded in H.263 (H.264 or MPEG-4 is still under consideration) to a web server. For this I need a video encoding library. Obviously ffmpeg can handle that task, but it is under the LGPL license, which could lead to problems when submitting the app to the App Store. On the other hand, there are some applications that seem to use the ffmpeg library, but only Timelapser clearly states this fact in its app description. Does this mean that other apps are not using ffmpeg, or are they just hiding this information?

    Please share your thoughts and experience on this topic. I'm open to discussion.
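
    For reference, the command-line analogue of the encoding task (inside an iOS app, ffmpeg would have to be linked as a library rather than invoked like this) might look roughly like the sketch below; the frame size, pixel format and frame rate are assumed values, not taken from the question:

    # Sketch only: encode raw frames read from stdin into an H.264 file.
    # 640x480 BGRA at 15 fps is an assumption for illustration.
    ffmpeg -f rawvideo -pix_fmt bgra -s 640x480 -r 15 -i - \
       -vcodec libx264 -b 400k out.h264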

  • Getting either incorrect output resolution or FPS from ffmpeg

    5 November 2011, by Adam

    I am capturing an RTSP stream from a security camera and transcoding it (for live streaming) to iPhone, using OS X as the encoding platform.
    I have it working correctly, and I'm tuning it.
    However, it seems that it is not outputting the requested resolution. This is my script:

    /Applications/SecurityCamera/openRTSP -v -c -t rtsp://10.0.1.118/ch1-s1 | \
       /Applications/SecurityCamera/ffmpeg \
       -r 10 -i - \
       -y -an -ab 64000 -f mpegts -vcodec copy -s 960x640 \
       -flags +loop -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 \
       -subq 5 -trellis 1 -refs 1 -coder 0 -me_range 16  -keyint_min 25 \
       -sc_threshold 40 -i_qfactor 0.71 -bt 400k -maxrate 524288 -bufsize 524288 \
       -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 30 \
       -aspect 960:640 -r 10 -g 10 -async 2 -\
       |/Applications/SecurityCamera/mediastreamsegmenter -b http://localhost:8080/\
         -f /Library/WebServer/Documents/ -i stream.m3u8 -t 10 -s 4 -D

    This is the status report:

    Input #0, h264, from 'pipe:':

     Duration: N/A, bitrate: N/A
     Stream #0.0: Video: h264, yuv420p, 1600x1200, 10 fps, 10 tbr, 1200k tbn, 20 tbc
     [mpegts @ 0x10100c200] muxrate VBR, pcr every 1 pkts, sdt every 200, pat/pmt every 40 pkts
     Output #0, mpegts, to 'pipe:':
     Metadata:
       encoder         : Lavf52.93.0
       Stream #0.0: Video: libx264, yuv420p, 1600x1200 [PAR 1:1 DAR 4:3], q=2-31, 90k tbn, 10 tbc
    Stream mapping:
     Stream #0.0 -> #0.0

    You can see that it's working, but it is outputting 1600x1200 for some reason (possibly -vcodec copy copies all codec parameters, not just the codec type?).

    If I change -vcodec copy to -vcodec libx264, then I get the correct status report (stating 960x640, which is correct), but the streaming drops to 2 FPS (why? I'm forcing the rate on both input and output!) and it halts after 54 frames (see the output below; a reworked version of the command follows it).

    Seems stream 0 codec frame rate differs from container frame rate: 20.00 (20/1) -> 10.00 (20/2)
    Input #0, h264, from 'pipe:':
     Duration: N/A, bitrate: N/A
       Stream #0.0: Video: h264, yuv420p, 1600x1200, 10 fps, 10 tbr, 1200k tbn, 20 tbc
    [buffer @ 0x100d02420] w:1600 h:1200 pixfmt:yuv420p
    [scale @ 0x100d026f0] w:1600 h:1200 fmt:yuv420p -> w:960 h:640 fmt:yuv420p flags:0x4
    [libx264 @ 0x10100d400] using SAR=1/1
    [libx264 @ 0x10100d400] frame MB size (60x40) > level limit (1620)
    [libx264 @ 0x10100d400] using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64 SlowCTZ SlowAtom
    [libx264 @ 0x10100d400] profile Constrained Baseline, level 3.0
    [mpegts @ 0x10100c200] muxrate VBR, pcr every 1 pkts, sdt every 200, pat/pmt every 40 pkts
    Output #0, mpegts, to 'pipe:':
     Metadata:
       encoder         : Lavf52.93.0
       Stream #0.0: Video: libx264, yuv420p, 960x640 [PAR 1:1 DAR 3:2], q=10-51, 200 kb/s, 90k tbn, 10 tbc
    Stream mapping:
     Stream #0.0 -> #0.0
    read pmap fffps=  3 q=37.0 size=      37kB time=0.10 bitrate=3008.0kbits/s    bits/s    
    video pid set at 100
    found sequence start
     next segment value 1026000
    written bytes 376 skipped 0
    frame=   54 fps=  2 q=-1.0 Lsize=     160kB time=5.40 bitrate= 242.0kbits/s    
    video:141kB audio:0kB global headers:0kB muxing overhead 12.872737%
    frame I:6     Avg QP:34.68  size: 23524
    [libx264 @ 0x10100d400] frame P:48    Avg QP:41.53  size:    75
    [libx264 @ 0x10100d400] mb I  I16..4: 63.9%  0.0% 36.1%
    [libx264 @ 0x10100d400] mb P  I16..4:  0.1%  0.0%  0.0%  P16..4:  0.8%  0.1%  0.0%  0.0%  0.0%    skip:99.0%
    [libx264 @ 0x10100d400] final ratefactor: 38.54
    [libx264 @ 0x10100d400] coded y,uvDC,uvAC intra: 57.7% 22.3% 2.0% inter: 0.0% 0.1% 0.0%
    [libx264 @ 0x10100d400] i16 v,h,dc,p: 23% 35% 27% 15%
    [libx264 @ 0x10100d400] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 32% 16%  4%  3%  3%  7%  4%  8%
    [libx264 @ 0x10100d400] i8c dc,h,v,p: 83% 11%  5%  0%
    [libx264 @ 0x10100d400] kb/s:214.43
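
    Reading these logs, one hedged interpretation: -vcodec copy passes the encoded bitstream through untouched, so -s and the x264 options have no effect, which is why the first run stays at 1600x1200. Once libx264 actually re-encodes, the heavier settings (-subq 5, -trellis 1) may simply be too slow for real time on this machine, which would explain the drop to 2 fps; the "frame MB size (60x40) > level limit (1620)" warning also suggests that -level 30 is too low for 960x640 (level 3.1 allows it). A lighter, untested variant of the same pipeline, keeping the original paths and addresses, might look like this:

    # Untested sketch: re-encode with libx264 so -s takes effect, relax the
    # costly x264 options, raise the level, and let x264 use all CPU cores.
    /Applications/SecurityCamera/openRTSP -v -c -t rtsp://10.0.1.118/ch1-s1 | \
       /Applications/SecurityCamera/ffmpeg \
       -r 10 -i - \
       -y -an -f mpegts -vcodec libx264 -s 960x640 \
       -subq 1 -trellis 0 -refs 1 -me_range 16 -g 10 -keyint_min 10 \
       -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 31 \
       -maxrate 524288 -bufsize 524288 \
       -r 10 -threads 0 - \
       | /Applications/SecurityCamera/mediastreamsegmenter -b http://localhost:8080/ \
         -f /Library/WebServer/Documents/ -i stream.m3u8 -t 10 -s 4 -D

    Whether this keeps up at 10 fps depends on the machine; if it still cannot, the remaining knobs are the output resolution and the x264 quality options.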

  • Streaming live video from iOS

    12 February 2014, by John

    I need to stream video from the iPhone/iPad camera to a server. It looks like this will need to be done with AVCaptureSession, but I don't know how best to architect it.

    I found this post:

    streaming video FROM an iPhone

    But it doesn't handle the "live" part; latency needs to be 2 or 3 seconds at most. Devices can be constrained to iPhone 4 or 4S capability if needed, and there is no requirement for HD; VGA is probably what we'll end up with. I assume any solution would use ffmpeg; I haven't found a more appropriate library. (A rough server-side sketch follows the question below.)

    How is this best accomplished?
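
    As a rough server-side sketch only: assuming the device pushes H.264 wrapped in MPEG-TS over a plain TCP connection, a reasonably recent ffmpeg could receive it and cut HLS segments short enough for a few seconds of latency. The port, file names and 2-second segment length below are assumptions for illustration, not part of the question:

    # Assumed setup: listen on TCP, pass the video through untouched,
    # and write live HLS segments plus a playlist with ffmpeg's segment muxer.
    ffmpeg -f mpegts -i "tcp://0.0.0.0:9000?listen" \
       -vcodec copy -an \
       -f segment -segment_time 2 -segment_format mpegts \
       -segment_list_flags live -segment_list stream.m3u8 \
       seg%04d.ts

    The capture-and-encode side on the device is the open part of the question; this only shows one way the ffmpeg-based server end could be wired up.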