
Other articles (99)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Videos

    21 April 2011, by

    Like "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 video tag.
    One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one) and that each browser natively supports only certain video formats.
    Its main advantage is that videos are handled natively by the browser, which removes the need for Flash and (...)

  • Enabling/disabling features (plugins)

    18 February 2011, by

    To manage adding and removing extra features (plugins), MediaSPIP uses SVP as of version 0.2.
    SVP makes it easy to activate plugins from the MediaSPIP configuration area.
    To get there, simply go to the configuration area and open the "Gestion des plugins" (plugin management) page.
    MediaSPIP ships by default with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work seamlessly with each (...)

On other sites (10134)

  • FFMPEG: Video file to YUV conversion by the ffmpeg binary and by C++ code gives different results

    30 June 2016, by Anny G

    Disclaimer: I have looked at the following question,
    FFMPEG: RGB to YUV conversion by binary ffmpeg and by code C++ give different results,
    but it did not help, and it is not applicable to my case because I am not using SwsContext or anything like it.

    Following the first few tutorials at http://dranger.com/ffmpeg/, I have created a simple program that reads a video, decodes it and, once a frame has been successfully decoded, writes the raw YUV values to a file (no padding), using the data provided by the AVFrame. To be more specific, I write out the arrays AVFrame->data[0], AVFrame->data[1] and AVFrame->data[2], i.e. I simply append the Y values, then the U values, then the V values to the file. The file turns out to be in yuv422p format.
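
    For reference, here is a minimal sketch of that plane-dumping step (a hypothetical helper, not the poster's actual code, assuming a decoded yuv422p AVFrame). A classic pitfall with this approach is that each row of data[i] is linesize[i] bytes wide, which can exceed the visible width because of alignment padding:

    #include <stdio.h>
    #include <libavutil/frame.h>

    /* Hedged sketch: append the Y, U and V planes of a decoded yuv422p frame
     * to an open FILE*. Rows are written one at a time because linesize[i]
     * may be larger than the visible row width due to alignment padding. */
    static void write_yuv422p_frame(FILE *out, const AVFrame *frame,
                                    int width, int height)
    {
        /* In yuv422p the chroma planes are half width but full height. */
        const int widths[3]  = { width, width / 2, width / 2 };
        const int heights[3] = { height, height, height };

        for (int plane = 0; plane < 3; plane++)
            for (int row = 0; row < heights[plane]; row++)
                fwrite(frame->data[plane] + row * frame->linesize[plane],
                       1, widths[plane], out);
    }

    Whether padding is actually the cause of the difference here is not established by the post; it is simply the most common way a naive dump of data[0..2] diverges from what the ffmpeg binary writes.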

    When I convert the same original video to raw YUV using the ffmpeg command-line tool (the same version of ffmpeg), the two YUV files are the same size but differ in content.

    FYI, I am able to play both YUV files in a YUV player, and they look identical as well.

    Here is the exact command I run to convert the original video to YUV with the ffmpeg command-line tool:

    ~/bin/ffmpeg -i super-short-video.h264 -c:v rawvideo -pix_fmt yuv422p  "super-short-video-yuv422p.yuv"

    What causes this difference in bytes, and can it be fixed? Is there perhaps another way of converting the original video to YUV with the ffmpeg tool, maybe with different settings?

    The ffmpeg output when I convert to the YUV format:

    ffmpeg version N-80002-g5afecff Copyright (c) 2000-2016 the FFmpeg developers
     built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.1)
     configuration: --prefix=/home/me/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/me/ffmpeg_build/include --extra-ldflags=-L/home/me/ffmpeg_build/lib --bindir=/home/me/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree --extra-cflags=-pg --extra-ldflags=-pg --disable-stripping
     libavutil      55. 24.100 / 55. 24.100
     libavcodec     57. 42.100 / 57. 42.100
     libavformat    57. 36.100 / 57. 36.100
     libavdevice    57.  0.101 / 57.  0.101
     libavfilter     6. 45.100 /  6. 45.100
     libswscale      4.  1.100 /  4.  1.100
     libswresample   2.  0.101 /  2.  0.101
     libpostproc    54.  0.100 / 54.  0.100
    Input #0, h264, from 'super-short-video.h264':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: h264 (High), yuv420p, 1280x720, 25 fps, 25 tbr, 1200k tbn
    [rawvideo @ 0x24f6fc0] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
    Output #0, rawvideo, to 'super-short-video-yuv422p.yuv':
     Metadata:
       encoder         : Lavf57.36.100
       Stream #0:0: Video: rawvideo (Y42B / 0x42323459), yuv422p, 1280x720, q=2-31, 200 kb/s, 25 fps, 25 tbn
       Metadata:
         encoder         : Lavc57.42.100 rawvideo
    Stream mapping:
     Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
    Press [q] to stop, [?] for help
    frame=   50 fps=0.0 q=-0.0 Lsize=   90000kB time=00:00:02.00 bitrate=368640.0kbits/s speed=11.3x    
    video:90000kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000%
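
    One hedged observation from the log above: the input stream is reported as yuv420p while the requested output is yuv422p, so the command-line path includes a chroma upsampling step that a plain dump of decoded planes would not perform.
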
  • FFmpeg not muxing audio and video

    10 August 2016, by Armen Apresyan

    I am writing a simple application that muxes the given mp4 and mp3 files and produces an mp4 file as the result.

    fFmpeg = FFmpeg.getInstance(this);
    cmd = new String[] {"-i", videoPath, "-i", audioPath, Environment.getExternalStorageDirectory() + "/output.mp4"};
    fFmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
        @Override
        public void onStart() {
            super.onStart();
            Log.e(TAG, "Started");
        }

        @Override
        public void onFailure(String message) {
            super.onFailure(message);
            Log.e(TAG, "failed: " + message);
        }

        @Override
        public void onProgress(String message) {
            super.onProgress(message);
            Log.e(TAG, "progress: " + message);
        }

        @Override
        public void onFinish() {
            super.onFinish();
            Log.e(TAG, "finish");
        }

        @Override
        public void onSuccess(String message) {
            super.onSuccess(message);
            Log.e(TAG, "success: " + message);
        }
    });

    Where videoPath and audioPath are paths to the video and audio files, for example storage/emulated/0/source.mp4. But I only get a copy of the video file, without my audio attached. What is my mistake?
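
    For reference, one commonly suggested variant (an assumption about the usual fix, not something confirmed by the post): pass explicit stream-mapping arguments in the cmd array, e.g. "-map", "0:v:0", "-map", "1:a:0", "-c", "copy" before the output path, so that ffmpeg keeps the video from the first input and the audio from the second instead of relying on its default stream selection.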

  • Trying to fix VPlayer's seeking ability, need some guidance [Android FFmpeg]

    1 June 2016, by vxh.viet

    I'm trying to fix the currently broken seeking ability of VPlayer, an FFmpeg-based player for Android. Being a Java developer, C code looks like an alien language to me, so I can only fix it using common logic (which may give any C veteran a good laugh).

    The relevant file is player.c, and I'll try my best to point out the relevant modifications.

    The basic idea is that FFmpeg's av_seek_frame is very inaccurate even with AVSEEK_FLAG_ANY, so I'm following this suggestion to seek backward to the nearest keyframe and then decode forward to the frame I want. One additional note: I want to seek by millisecond, while the suggested solution seeks by frame, which is potentially a source of problems.
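
    For reference, here is a minimal sketch of that keyframe-seek-then-decode-forward idea (hypothetical helper names, using the same FFmpeg API generation as the post, i.e. avcodec_decode_video2 and av_free_packet):

    /* Hedged sketch: seek to the keyframe at or before target_ts (in stream
     * time_base units; a millisecond position can be converted with
     * av_rescale_q(ms * 1000, AV_TIME_BASE_Q, stream->time_base)), then
     * decode forward until the decoded frame reaches the target. */
    static int seek_and_decode_to(AVFormatContext *fmt_ctx,
                                  AVCodecContext *codec_ctx,
                                  int stream_index, int64_t target_ts,
                                  AVFrame *frame)
    {
        AVPacket packet;
        int frame_done = 0;

        if (av_seek_frame(fmt_ctx, stream_index, target_ts,
                          AVSEEK_FLAG_BACKWARD) < 0)
            return -1;
        avcodec_flush_buffers(codec_ctx); /* drop pre-seek decoder state */

        while (av_read_frame(fmt_ctx, &packet) >= 0) {
            if (packet.stream_index == stream_index) {
                avcodec_decode_video2(codec_ctx, frame, &frame_done, &packet);
                if (frame_done &&
                    av_frame_get_best_effort_timestamp(frame) >= target_ts) {
                    av_free_packet(&packet);
                    return 0; /* frame now holds the target picture */
                }
            }
            av_free_packet(&packet);
        }
        return -1; /* reached end of file before target_ts */
    }

    The avcodec_flush_buffers call right after the seek matters: without it the decoder keeps state from before the seek, which typically shows up as garbage frames or crashes in exactly this kind of loop.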

    In the Player struct I add the following fields:

    struct Player{
    ....
    AVFrame *frame;
    int64_t current_time_stamp;
    };

    In player_read_from_stream I modify the seeking part as follows:

    void * player_read_from_stream(void *data) {
       ...
       struct DecoderData *decoder_data = data;
       int stream_no = decoder_data->stream_no;
       AVCodecContext * ctx = player->input_codec_ctxs[stream_no];
       ...
       // seeking, start my stuff
       if(av_seek_frame(player->input_format_ctx, seek_input_stream_number, seek_target, AVSEEK_FLAG_BACKWARD) >= 0){
           //seek to key frame success, now need to read every frame from the key frame to our target time stamp


           while(player->current_time_stamp < seek_target){

               int frame_done;

               while (av_read_frame(player->input_format_ctx, &packet) >= 0) {
                   if (packet.stream_index == seek_input_stream_number) {

                       avcodec_decode_video2(ctx, player->frame, &frame_done, &packet);
                       LOGI(1,"testing_stuff ctx %d", *ctx);
                       if (frame_done) {

                           player->current_time_stamp = packet.dts;
                           LOGI(1,"testing_stuff current_time_stamp: %"PRId64, player->current_time_stamp);
                           av_free_packet(&packet);
                           return;
                       }
                   }
                   av_free_packet(&packet);
               }
           }


       }
       //end my stuff

       LOGI(3, "player_read_from_stream seeking success");

       int64_t current_time = av_gettime();
       player->start_time = current_time - player->seek_position;
       player->pause_time = current_time;        
    }

    And in player_alloc_frames I allocate the memory for my frame:

    int player_alloc_frames(struct Player *player) {
       int capture_streams_no = player->caputre_streams_no;
       int stream_no;
       for (stream_no = 0; stream_no < capture_streams_no; ++stream_no) {
           player->input_frames[stream_no] = av_frame_alloc();

           //todo: test my stuff
           player->frame = av_frame_alloc();
           //end test

           if (player->input_frames[stream_no] == NULL) {
               return -ERROR_COULD_NOT_ALLOC_FRAME;
           }
       }
       return 0;
    }
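
    One detail worth flagging in the snippet above: player->frame is reallocated on every iteration of the stream loop, so all but the last allocated AVFrame leak; allocating it once, outside the loop, would be the usual pattern.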

    Currently it just keeps crashing and, in typical Android NDK fashion, it provides a super helpful stack trace:

    libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x40 in tid 2717 (FFmpegReadFromS)

    I would very much appreciate it if anyone could help me solve this problem. Thank you for your time.