Advanced search

Media (1)

Keyword: - Tags -/3GS

Other articles (71)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used as a fallback.
    The HTML5 player used was created specifically for MediaSPIP: it is fully customizable graphically to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

On other sites (6176)

  • How to use ffmpeg in JavaScript to decode H.264 frames into RGB frames

    17 June 2020, by noel

    I'm trying to compile ffmpeg into JavaScript so that I can decode H.264 video streams using Node. The streams are H.264 frames packed into RTP NALUs, so any solution has to be able to accept H.264 frames rather than a whole file name. These frames can't be in a container like MP4 or AVI because then the demuxer needs the timestamp of every frame before demuxing can occur, but I'm dealing with a real-time stream, no containers.

    Streaming H.264 over RTP

    Below is the basic code I'm using to listen on a UDP socket. Inside the 'message' callback the data packet is an RTP datagram. The data portion of the datagram is an H.264 frame (P-frames and I-frames).

var PORT = 33333;
var HOST = '127.0.0.1';

var dgram = require('dgram');
var server = dgram.createSocket('udp4');

server.on('listening', function () {
    var address = server.address();
    console.log('UDP Server listening on ' + address.address + ":" + address.port);
});

server.on('message', function (message, remote) {
    console.log(remote.address + ':' + remote.port + ' - ' + message);
    var frame = parse_rtp(message); // strip the RTP header, keep the raw H.264 payload

    var rgb_frame = some_library.decode_h264(frame); // This is what I need.

});

server.bind(PORT, HOST);  
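
    For reference, a minimal sketch of what a parse_rtp step like the one above has to do, written here in C since the eventual decoder wrapper will be C anyway. It assumes the standard 12-byte fixed RTP header from RFC 3550, ignores padding, and only handles unfragmented NAL units (FU-A fragmentation would need reassembly on top of this); the function is purely illustrative.

/* Illustrative sketch only: strip the RTP header and return a pointer to the
 * H.264 payload (the first payload byte is the NAL unit header).
 * Returns NULL if the packet is too short or not RTP version 2. */
#include <stdint.h>
#include <stddef.h>

static const uint8_t *parse_rtp(const uint8_t *pkt, size_t len, size_t *payload_len)
{
    if (len < 12 || (pkt[0] >> 6) != 2)            /* version field must be 2 */
        return NULL;

    size_t header_len = 12 + 4 * (pkt[0] & 0x0F);  /* skip the CSRC entries, if any */

    if (pkt[0] & 0x10) {                           /* extension bit set: skip the extension */
        if (len < header_len + 4)
            return NULL;
        size_t ext_words = ((size_t)pkt[header_len + 2] << 8) | pkt[header_len + 3];
        header_len += 4 + 4 * ext_words;
    }

    if (len <= header_len)
        return NULL;

    *payload_len = len - header_len;
    return pkt + header_len;
}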

    I found the Broadway.js library, but I couldn't get it working and it doesn't handle P-frames, which I need. I also found ffmpeg.js, but couldn't get that to work either, and it needs a whole file, not a stream. Likewise, fluent-ffmpeg doesn't appear to support file streams; all of the examples show a filename being passed to the constructor. So I decided to write my own API.

    My current solution attempt

    I have been able to compile ffmpeg into one big js file, but I can't use it like that. I want to write an API around ffmpeg and then expose those functions to JS. So it seems to me like I need to do the following:

    1. Compile the ffmpeg components (avcodec, avutil, etc.) into LLVM bitcode.
    2. Write a C wrapper that exposes the decoding functionality and uses EMSCRIPTEN_KEEPALIVE (a sketch of this wrapper follows the list).
    3. Use emcc to compile the wrapper and link it to the bitcode created in step 1.
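
    A rough idea of what the wrapper in step 2 could look like. This is only a sketch: the function names decoder_init and decode_frame are made up, error handling is minimal, it assumes one complete Annex-B H.264 frame per call, and the decoded picture is left in the decoder's native pixel format (converting to RGB would be a separate sws_scale step).

/* Illustrative sketch of an Emscripten-exposed H.264 decoder wrapper. */
#include <emscripten/emscripten.h>
#include <libavcodec/avcodec.h>

static AVCodecContext *ctx = NULL;
static AVFrame *frame = NULL;
static AVPacket *pkt = NULL;

/* Create and open an H.264 decoder once; returns 0 on success. */
EMSCRIPTEN_KEEPALIVE
int decoder_init(void)
{
    const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (!codec)
        return -1;
    ctx = avcodec_alloc_context3(codec);
    frame = av_frame_alloc();
    pkt = av_packet_alloc();
    if (!ctx || !frame || !pkt || avcodec_open2(ctx, codec, NULL) < 0)
        return -1;
    return 0;
}

/* Feed one H.264 frame (Annex-B NAL units) and try to pull out a decoded picture.
 * Returns 1 if a picture was produced (its dimensions are written to *width/*height),
 * 0 if the decoder needs more input, and a negative value on error. */
EMSCRIPTEN_KEEPALIVE
int decode_frame(const uint8_t *data, int size, int *width, int *height)
{
    pkt->data = (uint8_t *)data;
    pkt->size = size;
    if (avcodec_send_packet(ctx, pkt) < 0)
        return -2;
    int ret = avcodec_receive_frame(ctx, frame);
    if (ret == AVERROR(EAGAIN))
        return 0;
    if (ret < 0)
        return -3;
    *width = frame->width;
    *height = frame->height;
    return 1;
}

    After step 3, functions marked with EMSCRIPTEN_KEEPALIVE can be reached from JavaScript through Module.ccall or Module.cwrap, with the frame bytes copied into the Emscripten heap first.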

    I found WASM+ffmpeg, but it's in Chinese and some of the steps aren't clear. In particular there is this step:

    emcc web.c process.c ../lib/libavformat.bc ../lib/libavcodec.bc ../lib/libswscale.bc ../lib/libswresample.bc ../lib/libavutil.bc \

    Where I think I'm stuck :(

    I don't understand how all the ffmpeg components get compiled into separate *.bc files. I followed the emmake commands in that article and I end up with one big .bc file.

    2 questions

    1. Does anyone know the steps to compile ffmpeg using emscripten so that I can expose some API to JavaScript?
    2. Is there a better way (with decent documentation/examples) to decode H.264 video streams using Node?

  • Bug #4526 (New): Let's remove the dates from the copyrights

    15 July 2020, by Franck D

    Hello :)
    Ok, the dates no longer have any value, so what should we do about the file header blocks?
    Here is option 1:

    • SPIP, Systeme de publication pour l’internet
    • Copyright (c)
    • Arnaud Martin, Antoine Pitrou, Philippe Riviere, Emmanuel Saint-James
    • Ce programme est un logiciel libre distribue sous licence GNU/GPL.
    • Pour plus de details voir le fichier COPYING.txt ou l’aide en ligne.

    Option 2 (accents and ©)?

    • SPIP, Système de publication pour l’internet
    • Copyright ©
    • Arnaud Martin, Antoine Pitrou, Philippe Riviere, Emmanuel Saint-James
    • Ce programme est un logiciel libre distribue sous licence GNU/GPL.
    • Pour plus de détails voir le fichier COPYING.txt ou l’aide en ligne.

    Something else????

  • How to properly pass an asset FileDescriptor to FFmpeg using JNI in Android

    6 January 2021, by William Seemann

    I'm trying to retrieve metadata in Android using FFmpeg, JNI and a Java FileDescriptor, and it isn't working. I know FFmpeg supports the pipe protocol, so I'm trying to emulate "cat test.mp3 | ffmpeg -i pipe:0" programmatically. I use the following code to get a FileDescriptor from an asset bundled with the Android application:

    FileDescriptor fd = getContext().getAssets().openFd("test.mp3").getFileDescriptor();
setDataSource(fd, 0, 0x7ffffffffffffffL); // native function, shown below

    Then, in my native (C++) code I get the file descriptor by calling:

    static void wseemann_media_FFmpegMediaMetadataRetriever_setDataSource(JNIEnv *env, jobject thiz, jobject fileDescriptor, jlong offset, jlong length)
{
    //...

    int fd = jniGetFDFromFileDescriptor(env, fileDescriptor); // function contents shown below

    //...
}

// function contents
static int jniGetFDFromFileDescriptor(JNIEnv * env, jobject fileDescriptor) {
    jint fd = -1;
    jclass fdClass = env->FindClass("java/io/FileDescriptor");

    if (fdClass != NULL) {
        jfieldID fdClassDescriptorFieldID = env->GetFieldID(fdClass, "descriptor", "I");
        if (fdClassDescriptorFieldID != NULL && fileDescriptor != NULL) {
            fd = env->GetIntField(fileDescriptor, fdClassDescriptorFieldID);
        }
    }

    return fd;
}

    I then pass the file descriptor pipe number (in C) to FFmpeg:

char path[256] = "";

FILE *file = fdopen(fd, "rb");

if (file && (fseek(file, offset, SEEK_SET) == 0)) {
    char str[20];
    sprintf(str, "pipe:%d", fd);
    strcat(path, str);
}

State *state = av_mallocz(sizeof(State));
state->pFormatCtx = NULL;

if (avformat_open_input(&state->pFormatCtx, path, NULL, &options) != 0) { // Note: path is in the format "pipe:<the fd>"
    printf("Metadata could not be retrieved\n");
    *ps = NULL;
    return FAILURE;
}

if (avformat_find_stream_info(state->pFormatCtx, NULL) < 0) {
    printf("Metadata could not be retrieved\n");
    avformat_close_input(&state->pFormatCtx);
    *ps = NULL;
    return FAILURE;
}

// Find the first audio and video stream
for (i = 0; i < state->pFormatCtx->nb_streams; i++) {
    if (state->pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO && video_index < 0) {
        video_index = i;
    }

    if (state->pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO && audio_index < 0) {
        audio_index = i;
    }

    set_codec(state->pFormatCtx, i);
}

if (audio_index >= 0) {
    stream_component_open(state, audio_index);
}

if (video_index >= 0) {
    stream_component_open(state, video_index);
}

printf("Found metadata\n");
AVDictionaryEntry *tag = NULL;
while ((tag = av_dict_get(state->pFormatCtx->metadata, "", tag, AV_DICT_IGNORE_SUFFIX))) {
    printf("Key %s: \n", tag->key);
    printf("Value %s: \n", tag->value);
}

*ps = state;
return SUCCESS;

    My issue is that avformat_open_input doesn't fail, but it also doesn't let me retrieve any metadata or frames. The same code works if I use a regular file URI (e.g. file://sdcard/test.mp3) as the path. What am I doing wrong? Thanks in advance.

    Note: if you would like to look at all of the code, it is in my library, FFmpegMediaMetadataRetriever; I'm trying to solve this issue in order to provide this functionality there.
