Advanced search

Media (1)

Word: - Tags -/ogg

Other articles (67)

  • Contribute to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language, helping it reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to sign up to the translators' mailing list to ask for more information.
    At present MediaSPIP is only available in French and (...)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)

  • MediaSPIP Player: potential problems

    22 February 2011, by

    The player does not work in Internet Explorer
    In Internet Explorer (8 and 7 at least), the plugin uses the Flash player Flowplayer to play video and sound. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
    If the configuration of that Apache module contains a line like the following, try removing it or commenting it out to see whether the player works correctly: (...)

On other sites (15253)

  • How to use ffmpeg in JavaScript to decode H.264 frames into RGB frames

    17 June 2020, by noel

    I'm trying to compile ffmpeg into JavaScript so that I can decode H.264 video streams using Node. The streams are H.264 frames packed into RTP NALUs, so any solution has to accept H.264 frames rather than a whole file name. These frames can't be in a container like MP4 or AVI, because then the demuxer needs the timestamp of every frame before demuxing can occur, and I'm dealing with a real-time stream: no containers.

    Streaming H.264 over RTP

    Below is the basic code I'm using to listen on a UDP socket. Inside the 'message' callback the data packet is an RTP datagram. The data portion of the datagram is an H.264 frame (P-frames and I-frames).

var PORT = 33333;
var HOST = '127.0.0.1';

var dgram = require('dgram');
var server = dgram.createSocket('udp4');

server.on('listening', function () {
    var address = server.address();
    console.log('UDP Server listening on ' + address.address + ':' + address.port);
});

server.on('message', function (message, remote) {
    console.log(remote.address + ':' + remote.port + ' - ' + message);
    var frame = parse_rtp(message); // strip the RTP header, keep the H.264 payload

    var rgb_frame = some_library.decode_h264(frame); // This is what I need.
});

server.bind(PORT, HOST);

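    parse_rtp just strips the RTP header and returns the payload. It isn't shown above, so here is a minimal sketch of it, assuming each packet carries a single NAL unit (FU-A fragments would still need reassembly) and ignoring the padding bit:

function parse_rtp(buf) {
    // RFC 3550: 12-byte fixed header plus 4 bytes per CSRC entry.
    var csrcCount = buf[0] & 0x0f;
    var hasExtension = (buf[0] & 0x10) !== 0;
    var offset = 12 + 4 * csrcCount;
    if (hasExtension) {
        // Header extension: 2 bytes profile, 2 bytes length in 32-bit words.
        var extWords = buf.readUInt16BE(offset + 2);
        offset += 4 + 4 * extWords;
    }
    return buf.slice(offset); // H.264 payload (one NAL unit, by assumption)
}
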
    I found the Broadway.js library, but I couldn't get it working, and it doesn't handle P-frames, which I need. I also found ffmpeg.js, but couldn't get that to work either, and it needs a whole file, not a stream. Likewise, fluent-ffmpeg doesn't appear to support file streams; all of the examples show a filename being passed to the constructor. So I decided to write my own API.

    My current solution attempt

    I have been able to compile ffmpeg into one big JS file, but I can't use it like that. I want to write an API around ffmpeg and then expose those functions to JS. So it seems to me like I need to do the following:

    1. Compile ffmpeg components (avcodec, avutil, etc.) into LLVM bitcode.
    2. Write a C wrapper that exposes the decoding functionality and uses EMSCRIPTEN_KEEPALIVE (a sketch of the JS side follows this list).
    3. Use emcc to compile the wrapper and link it to the bitcode created in step 1.

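    For step 3, my thinking is the JS side would look roughly like the sketch below. Everything here is assumed rather than working code: decode_frame is a hypothetical export, and the exact module loading and runtime helpers (cwrap, HEAPU8) depend on emcc flags such as MODULARIZE and EXPORTED_RUNTIME_METHODS:

var Module = require('./decoder.js'); // hypothetical emcc output

// Wrap a hypothetical exported C function:
//   uint8_t *decode_frame(uint8_t *data, int len);
var decode_frame = Module.cwrap('decode_frame', 'number', ['number', 'number']);

function decode(h264Buf) {
    var ptr = Module._malloc(h264Buf.length);       // copy the frame into wasm memory
    Module.HEAPU8.set(h264Buf, ptr);
    var rgbPtr = decode_frame(ptr, h264Buf.length); // pointer into HEAPU8, or 0 on error
    Module._free(ptr);
    return rgbPtr;
}
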
    I found WASM+ffmpeg, but it's in Chinese and some of the steps aren't clear. In particular there is this step:

    emcc web.c process.c ../lib/libavformat.bc ../lib/libavcodec.bc ../lib/libswscale.bc ../lib/libswresample.bc ../lib/libavutil.bc \

    Where I think I'm stuck :(

    I don't understand how all the ffmpeg components get compiled into separate *.bc files. I followed the emmake commands in that article and I end up with one big .bc file.

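    My current (unverified) understanding is that emconfigure/emmake leave one static archive per component, and that the ../lib/*.bc files in the article are just those archives renamed, since objects produced by emcc are LLVM bitcode. Something like:

emconfigure ./configure --cc=emcc --ar=emar --ranlib=emranlib --disable-programs --disable-doc
emmake make
# per-component archives: libavcodec/libavcodec.a, libavformat/libavformat.a, ...
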
    2 questions

    1. Does anyone know the steps to compile ffmpeg using emscripten so that I can expose some API to JavaScript?
    2. Is there a better way (with decent documentation/examples) to decode H.264 video streams using Node?

  • How to encode specific metadata version in FFMPEG?

    11 February 2021, by Charlie Britton

    I am batch-converting lots of songs into shorter "advert" songs for SHOUTcast, so that they are recognised as adverts by the server. The song must have ":Advert" for both the title and the artist metadata tags. When I use the following command:

    ffmpeg -i "$i" -c copy -vn -map_metadata -1 -metadata title=":Advert" -metadata artist=":Advert" -t 120 "adverts/ADVERT_$i"

    I would expect it to output the song with only ":Advert" as the title and artist metadata, but when I import it into the radio playout software (which uses ID3 1.x tagging) the metadata has not copied across and is therefore lost. Output from ffmpeg:

    ffmpeg version 3.0.2 Copyright (c) 2000-2016 the FFmpeg developers
  built with Apple LLVM version 9.0.0 (clang-900.0.37)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/3.0.2 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-opencl --enable-libx264 --enable-libmp3lame --enable-libxvid --disable-lzma --enable-vda
  libavutil      55. 17.103 / 55. 17.103
  libavcodec     57. 24.102 / 57. 24.102
  libavformat    57. 25.100 / 57. 25.100
  libavdevice    57.  0.101 / 57.  0.101
  libavfilter     6. 31.100 /  6. 31.100
  libavresample   3.  0.  0 /  3.  0.  0
  libswscale      4.  0.100 /  4.  0.100
  libswresample   2.  0.101 /  2.  0.101
  libpostproc    54.  0.100 / 54.  0.100
[mp3 @ 0x7feba6800000] Skipping 0 bytes of junk at 230934.
[mjpeg @ 0x7feba7000600] Changing bps to 8
Input #0, mp3, from 'Joakim Karud - Vibe With Me.mp3':
  Metadata:
    major_brand     : dash
    minor_version   : 0
    compatible_brands: iso6mp41
    encoder         : Lavf56.40.101
    artist          : Joakim Karud
    title           : Vibe With Me
  Duration: 00:02:53.06, start: 0.025056, bitrate: 138 kb/s
    Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16p, 128 kb/s
    Metadata:
      encoder         : Lavc56.60
    Stream #0:1: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 1280x720 [SAR 1:1 DAR 16:9], 90k tbr, 90k tbn, 90k tbc
    Metadata:
      comment         : Cover (front)
Output #0, mp3, to 'adverts/ADVERT_Joakim Karud - Vibe With Me.mp3':
  Metadata:
    TIT2            : :Advert
    TPE1            : :Advert
    TSSE            : Lavf57.25.100
    Stream #0:0: Audio: mp3, 44100 Hz, stereo, 128 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
size=    1876kB time=00:02:00.00 bitrate= 128.1kbits/s speed=1.44e+03x
video:0kB audio:1876kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.024837%

    I believe this is happening because the tag names are different (e.g. the title tag should be title, but is written as TIT2 in the output). Could someone explain how I can ensure that the metadata is encoded in the ID3 1.x format, so that it is readable by the radio playout software? Many thanks.

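    Possibly relevant, though untested on this ffmpeg 3.0.2 build: the mp3 muxer has write_id3v1 and id3v2_version options, so something like the following might force an ID3v1 tag to be written alongside the ID3v2 one:

    ffmpeg -i "$i" -c copy -vn -map_metadata -1 -metadata title=":Advert" -metadata artist=":Advert" -write_id3v1 1 -id3v2_version 3 -t 120 "adverts/ADVERT_$i"
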
  • Am I missing a timeout param in FFMPEG?

    5 May 2020, by Dave Stein

    I'm running an ffmpeg command like this:

    ffmpeg -loglevel quiet -report -timelimit 15 -timeout 10 -protocol_whitelist file,http,https,tcp,tls,crypto -i ${inputFile} -vframes 1 ${outputFile} -y

    This is running in an AWS Lambda function. My Lambda timeout is 30 seconds. For some reason I am still getting "Task timed out" messages. I should note that I log before and after the command, so I know it's timing out during this task.

    Update

    In terms of the entire Lambda execution, I do the following:

    • Invoke a lambda to get an access token. This lambda makes one API request. It has a timeout of 5 seconds. The max time was 660 ms for one request.
    • Make another API request to verify data. The max time was 1.6 seconds.
    • Run FFmpeg.

    timelimit is supposed to "exit after ffmpeg has been running for duration seconds in CPU user time". Theoretically, then, this shouldn't run for more than 15 seconds, plus maybe 2-3 more for the other requests.

    timeout is probably superfluous here. There were a lot of definitions for it in the manual, but I think it mainly applies to waiting on input? Either way, I'd think timelimit would cover my bases.

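    Since timelimit counts CPU user time (a stalled network read burns almost no CPU) and, for the http protocol at least, the timeout AVOption is measured in microseconds rather than seconds, one fallback is to enforce the wall-clock deadline from the caller. A minimal sketch, assuming the Lambda is Node-based:

const { spawn } = require('child_process');

function runFfmpeg(args, wallClockMs) {
    return new Promise(function (resolve, reject) {
        const proc = spawn('ffmpeg', args);
        const killer = setTimeout(function () {
            proc.kill('SIGKILL'); // hard stop once the wall-clock deadline passes
            reject(new Error('ffmpeg exceeded wall-clock limit'));
        }, wallClockMs);
        proc.on('exit', function (code) {
            clearTimeout(killer);
            if (code === 0) { resolve(); } else { reject(new Error('ffmpeg exited with ' + code)); }
        });
    });
}
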
    Update 2

    I checked my debug log and saw this:

    Reading option '-timelimit' ... matched as option 'timelimit' (set max runtime in seconds) with argument '15'.
Reading option '-timeout' ... matched as AVOption 'timeout' with argument '10'.

    It seems both options are supported by my build.

    Update 3

    I have updated my code with a lot of logs. I definitely see the FFmpeg command as the last thing that executes before stalling out for the 30-second timeout.

    Update 4

    I can reproduce the behavior by pointing at a track instead of the full manifest. I have set the command to this:

    ffmpeg -loglevel debug -timelimit 5 -timeout 5  -i 'https://streamprod-eastus-streamprodeastus-usea.streaming.media.azure.net/0c495135-95fa-48ec-a258-4ba40262e1be/23ab167b-9fec-439e-b447-d355ff5705df.ism/QualityLevels(200000)/Manifest(video,format=m3u8-aapl)' -vframes 1 temp.jpg -y

    A few things here:

    1. I typically point at the actual manifest (not the track), and things usually run much faster.
    2. I have lowered the timelimit and timeout to 5. Despite this, when I run a timer, the command runs for 15 seconds every time. It outputs a bunch of errors, likely due to this being a track rather than the full manifest, and then spits out the desired image.

    The full output is at https://gist.github.com/DaveStein/b3803f925d64dd96cd45ae9db5e5a4d0