
Other articles (29)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash fallback is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries: FFMpeg: the main encoder; it transcodes almost all types of video and audio files into formats readable on the Internet (see this tutorial for its installation); Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
    Complementary and optional binaries: flvtool2: (...)
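
    As a rough illustration of the transcoding step delegated to FFMpeg (the exact options SPIPmotion passes are not documented in this excerpt; the commands below are generic, assumed examples for common web formats):

    # assumed example: H.264/AAC MP4, playable by the HTML5/Flash players above
    ffmpeg -i source.avi -c:v libx264 -c:a aac output.mp4
    # assumed example: Theora/Vorbis Ogg; the result can be inspected with oggz-tools
    ffmpeg -i source.avi -c:v libtheora -c:a libvorbis output.ogv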

On other sites (4825)

  • ffmpeg screen blending mode with transparency

    17 November 2013, by gilad s

    I have a sequence of PNG files with transparency and a video: video1.mp4

    1. I'd like to convert the PNG files into a video with an alpha channel: video2.mp4

    2. I'd like to merge video2.mp4 with video1.mp4 so that the transparent parts of video2 are not seen and the other parts are screen-blended with video1.

    When I do:

    ffmpeg -i "%03d.png" -vf scale=640:480 -vcodec libx264 video2.mp4

    a video is created with black where the transparent parts are supposed to be (I'm not sure whether it's really transparent; I tried other encoders and all gave the same result).

    Afterwards, when I do:

    ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex "blend=all_mode='screen'" output.mp4

    the videos are indeed merged, but the output has a pink overlay?!
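
    A plausible fix, sketched under two assumptions rather than taken from a verified answer: first, libx264 cannot store an alpha channel, so the black background is expected (codecs such as qtrle in a .mov container do keep alpha); second, the pink cast is typical of applying screen mode to YUV chroma planes, since the blend filter works on whatever pixel format it receives. Because screen-blending with black leaves the base pixel unchanged, the black-backed video2.mp4 may already be usable, and forcing both inputs to planar RGB before blending should remove the cast:

    ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex \
      "[0:v]format=gbrp[a];[1:v]format=gbrp[b];[a][b]blend=all_mode=screen,format=yuv420p" \
      output.mp4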

  • Showing a video using C (not C++ or C#)

    1 October 2013, by user2833591

    I learned programming in C using the Tscope-library (which is incompatible with C++ and C#), so I'm completely stuck with C.

    The Tscope-library is used to program small psychological experiments; it provides functions that generate random numbers or draw images on the screen. I'm not sure whether it matters, but Tscope does generate its own 'window'.

    So I wanted my experiment to show videos (currently in .wmv format, but that can be changed, no problem), but I don't know how to do so (neither in code nor in concept).

    I have come across FFmpeg, but the longer I look at its code, the more I worry it's not meant for C (parts of the code appear completely unknown to me). Could someone please help me? If FFmpeg is indeed the answer, could someone give a quick run-down of the idea behind how it works (I've seen something about frames being put together)?
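
    For what it's worth, FFmpeg's libraries are plain C, and the "frames being put together" idea is its core loop: a demuxer reads compressed packets from the container and a decoder turns them into raw frames you can draw. A minimal sketch using the 2013-era API (the setup of the contexts is assumed here, and resembles the setup code in the next question; the Tscope drawing call is left as a comment since its API isn't given):

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* fmt_ctx, codec_ctx, frame and video_stream are assumed to be set up
     * beforehand: open the input, find the video stream, open its codec,
     * allocate a frame. */
    static void play_video(AVFormatContext *fmt_ctx, AVCodecContext *codec_ctx,
                           int video_stream, AVFrame *frame)
    {
        AVPacket pkt;
        int got_frame;

        while (av_read_frame(fmt_ctx, &pkt) >= 0) {    /* demux one packet */
            if (pkt.stream_index == video_stream) {
                /* decode the compressed packet into a raw picture */
                avcodec_decode_video2(codec_ctx, frame, &got_frame, &pkt);
                if (got_frame) {
                    /* frame->data / frame->linesize now hold the pixels;
                       convert with sws_scale if needed and hand them to
                       Tscope's drawing routine here. */
                }
            }
            av_free_packet(&pkt);   /* release the packet's buffers */
        }
    }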

  • JNI crash when I split code in two functions

    13 September 2013, by Lescott

    I have a properly working native C function which I call from my Java code. But when I split this code into two functions and call them sequentially, I get a fatal error.

    //global variables
    AVFormatContext *pFormatCtx;
    AVFrame         *pFrame;
    AVFrame         *pFrameRGB;
    AVCodecContext  *pCodecCtx;
    AVCodec         *pCodec;
    uint8_t         *buffer;
    int             videoStream;
    struct SwsContext      *sws_ctx = NULL;
    int outWidth, outHeight;

    Working unsplit function

    JNIEXPORT void JNICALL Java_foo(JNIEnv * env, jclass class) {
       av_register_all();
       const char* videoPath = "11.mp4";

       int             numBytes;
       AVDictionary    *optionsDict = NULL;

       pFrame = NULL;
       pFrameRGB = NULL;
       buffer = NULL;
       pCodec = NULL;
       pFormatCtx = NULL;

       // Open video file
       if(avformat_open_input(&pFormatCtx, videoPath, NULL, NULL)!=0)
               exit(1); // Couldn't open file


       // Retrieve stream information
       if(avformat_find_stream_info(pFormatCtx, NULL)<0)
               exit(1); // Couldn't find stream information

        av_dump_format(pFormatCtx, 0,videoPath, 0);


       // Find the first video stream
       videoStream=-1;
       int i;
       for(i=0; i < pFormatCtx->nb_streams; i++) {
               if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
                       videoStream=i;
                       break;
               }
       }

       if(videoStream==-1)
               exit(1); // Didn't find a video stream

       // Get a pointer to the codec context for the video stream
       pCodecCtx=pFormatCtx->streams[videoStream]->codec;


       // Find the decoder for the video stream
       pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
       if(pCodec==NULL) {
               fprintf(stderr, "Unsupported codec!\n");
               exit(1); // Codec not found
       }

       // Open codec
       if(avcodec_open2(pCodecCtx, pCodec, &optionsDict)<0)
               exit(1); // Could not open codec

       // Allocate video frame
       pFrame=avcodec_alloc_frame();

       // Allocate an AVFrame structure
       pFrameRGB=avcodec_alloc_frame();
       if(pFrameRGB==NULL)
               exit(1);

       outWidth = 128;
       outHeight = 128;

       // Determine required buffer size and allocate buffer
       numBytes=avpicture_get_size(PIX_FMT_RGB24, outWidth, outHeight);
       buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

       sws_ctx = sws_getContext(
                               pCodecCtx->width,
                               pCodecCtx->height,
                               pCodecCtx->pix_fmt,
                               outWidth,
                               outHeight,
                               PIX_FMT_RGB24,
                               SWS_BILINEAR,
                               NULL,
                               NULL,
                               NULL
                       );

       // Assign appropriate parts of buffer to image planes in pFrameRGB
       // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
       // of AVPicture
       avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24, outWidth, outHeight);
    }

    Failing split functions

    JNIEXPORT void JNICALL Java_foo1(JNIEnv * env, jclass class) {
       av_register_all();
    }

    JNIEXPORT void JNICALL Java_foo2(JNIEnv * env, jclass class) {
       // all lines of code from Java_foo except the first
    }

    Java code

    System.loadLibrary("mylib");
    Mylib.foo1();
    Mylib.foo2(); //fatal error


    #
    # A fatal error has been detected by the Java Runtime Environment:
    #
    #  SIGSEGV (0xb) at pc=0x00007faab5012dc0, pid=15571, tid=140371352766208

    Any ideas?
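
    One generic way to narrow this down (a sketch of a debugging step, not the confirmed cause of this crash): bracket each FFmpeg call in the second function with a checkpoint flushed to stderr, so the exact call that segfaults is visible before the VM dies. Note also that exported JNI symbols must encode the full package path (Java_<package>_<Class>_<method>); the Java_Mylib_foo2 name below assumes Mylib sits in the default package.

    #include <stdio.h>
    #include <jni.h>
    #include <libavformat/avformat.h>

    /* Flushed checkpoint: still visible even if the very next call crashes. */
    #define CHECKPOINT(tag) \
        do { fprintf(stderr, "foo2: %s\n", tag); fflush(stderr); } while (0)

    JNIEXPORT void JNICALL Java_Mylib_foo2(JNIEnv *env, jclass clazz)
    {
        /* assumes foo1 already ran av_register_all() */
        AVFormatContext *ctx = NULL;

        CHECKPOINT("enter");
        if (avformat_open_input(&ctx, "11.mp4", NULL, NULL) != 0) {
            CHECKPOINT("avformat_open_input failed");
            return;
        }
        CHECKPOINT("input opened");
        if (avformat_find_stream_info(ctx, NULL) < 0) {
            CHECKPOINT("avformat_find_stream_info failed");
            return;
        }
        CHECKPOINT("stream info found");
        /* ...continue bracketing each subsequent FFmpeg call the same way. */
    }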