Other articles (27)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present the changes to your MédiaSPIP, or news about your projects on your MédiaSPIP, using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of type "news item", the fields offered by default are: Publication date (customise the publication date) (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact your MédiaSpip administrator to find out.

On other sites (5475)

  • overlay and combine ffmpeg at once

    3 June 2015, by müngi

    Initial situation

    • video1, h264 (but the codec isn’t actually important I guess), duration 10
      seconds, non-transparent
    • video2, flv, duration 10 seconds, with transparency

    I use video1 as the "starting"/"background" clip; then, after second 5, I'd like to overlay video2 on it, resulting in a 15-second clip.

    Let me explain it more graphically:

    video1: 111111111111111111
    video2:          222222222222222222
    result: 111111111333333333222222222

    Using the FFmpeg overlay filter like this:

    ffmpeg -i K00187_KOMIKER_NIAVARANI_MICHAEL_01.mp4 -i alpha_vid.flv \
      -filter_complex "overlay" test.mp4
    

    Of course this just overlays both videos and stops right after video1 has ended.
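
    One approach that should get close to the desired timeline (a sketch rather than a verified answer: it assumes an FFmpeg build recent enough to ship the tpad filter, and it reuses the file names from the command above) is to pad the background clip by 5 seconds, shift video2's timestamps by 5 seconds, and let overlay keep passing the background through once video2 ends:

    ffmpeg -i K00187_KOMIKER_NIAVARANI_MICHAEL_01.mp4 -i alpha_vid.flv \
      -filter_complex "[0:v]tpad=stop_mode=add:stop_duration=5:color=black[bg];[1:v]setpts=PTS+5/TB[ov];[bg][ov]overlay=eof_action=pass" \
      test.mp4

    Here tpad appends 5 seconds of black to video1 so the main input actually lasts 15 seconds (stop_mode=clone would freeze its last frame instead), setpts delays video2 so it starts at second 5, and eof_action=pass stops overlaying once video2 runs out. Audio, if needed, would have to be handled separately.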

  • Damaged h264 stream not working with ffmpeg but working with vlc or mplayer

    15 April 2013, by gregoiregentil

    I have an h264 file, coming from an RTSP stream, that is slightly damaged: some frames are altered.

    ffmpeg reports:

    ffmpeg -i stream.mpg
    ffmpeg version 0.8.6-4:0.8.6-0ubuntu0.12.04.1, Copyright (c) 2000-2013 the Libav developers
     built on Apr  2 2013 17:00:59 with gcc 4.6.3
    *** THIS PROGRAM IS DEPRECATED ***
    This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.

    Seems stream 0 codec frame rate differs from container frame rate: 180000.00 (180000/1) -> 90000.00 (180000/2)
    Input #0, mpegts, from 'a.mpg':
     Duration: 00:03:18.84, start: 93370.745522, bitrate: 2121 kb/s
     Program 1
       Stream #0.0[0x44](): Video: h264 (Baseline), yuv420p, 640x480, 90k tbr, 90k tbn, 180k tbc
    At least one output file must be specified

    I can play the file with VLC or mplayer. Admittedly the damaged frames are "kind of blurred", but it works. mplayer reports:

    mplayer stream.mpg
    MPlayer2 UNKNOWN (C) 2000-2011 MPlayer Team
    mplayer: could not connect to socket
    mplayer: No such file or directory
    Failed to open LIRC support. You will not be able to use your remote control.

    Playing stream.mpg.
    Detected file format: MPEG-2 transport stream format (libavformat)
    [lavf] stream 0: video (h264), -vid 0
    LAVF: Program 1
    VIDEO:  [H264]  640x480  0bpp  90000.000 fps    0.0 kbps ( 0.0 kbyte/s)
    Load subtitles in .
    [ass] auto-open
    ==========================================================================
    Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
    Asking decoder to use 2 threads if supported.
    Selected video codec: [ffh264] vfm: ffmpeg (FFmpeg H.264)
    ==========================================================================
    Audio: no sound
    Starting playback...
    V:   0.0   0/  0 ??% ??% ??,?% 0 0
    Movie-Aspect is undefined - no prescaling applied.
    VO: [xv] 640x480 => 640x480 Planar YV12
    V:93370.7   0/  0 ??% ??% ??,?% 0 0
    No pts value from demuxer to use for frame!
    Video pts after filters MISSING
    V:93370.7   0/  0 ??% ??% ??,?% 0 0
    No pts value from demuxer to use for frame!
    Video pts after filters MISSING
    V:93370.7   0/  0 ??% ??% ??,?% 0 0
    No pts value from demuxer to use for frame!
    Video pts after filters MISSING
    V:93370.7   0/  0 ??% ??% ??,?% 0 0
    No pts value from demuxer to use for frame!
    Video pts after filters MISSING
    V:93370.7   0/  0 ??% ??% ??,?% 0 0
    No pts value from demuxer to use for frame!
    Video pts after filters MISSING
    V:93370.7   0/  0 ??% ??% ??,?% 0 0
    No pts value from demuxer to use for frame!
    Video pts after filters MISSING
    V:93370.7   0/  0 ??% ??% ??,?% 0 0
    No pts value from demuxer to use for frame!

    When I try to re-encode the file with:

    ffmpeg -i stream.mpg -fflags +genpts -an -vcodec mpeg4 -r 65535/2733 stream.mp4

    ffmpeg seems to skip over the altered frames, and the resulting stream.mp4 is much shorter than stream.mpg.

    How could I fix this problem, i.e. get ffmpeg to output something similar to what mplayer and VLC produce?
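
    One direction to experiment with (a sketch, not a verified fix: it assumes a current ffmpeg build rather than the deprecated 0.8/avconv one shown above, and whether these flags actually conceal this particular damage is untested) is to relax error detection and regenerate timestamps on the input side, before -i, so damaged frames are concealed rather than dropped:

    ffmpeg -err_detect ignore_err -ec guess_mvs+deblock -fflags +genpts \
      -i stream.mpg -an -vcodec mpeg4 -qscale:v 4 stream.mp4

    Here -err_detect ignore_err asks the decoder to keep going on bitstream errors, -ec enables error concealment (motion-vector guessing and deblocking), and +genpts generates the pts values that mplayer is also complaining about; note that in the command above -fflags +genpts sits after -i, where it applies to the output muxer instead of the demuxer.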

  • need help configuring ffmpeg to decode raw AAC with android ndk

    24 October 2016, by Matt Wolfe

    I've got an Android app that receives raw AAC bytes from an external device, and I want to decode that data, but I can't get the decoder to work, even though ffmpeg decodes an mp4 file containing the same audio data just fine (verified with isoviewer). Recently I was able to get this ffmpeg library on Android to decode video frames from the same external device, but audio won't work.

    Here is the ffmpeg output for the file with the same data:

    $ ffmpeg -i Video_2000-01-01_0411.mp4
    ffmpeg version 2.6.1 Copyright (c) 2000-2015 the FFmpeg developers
     built with Apple LLVM version 6.0 (clang-600.0.57) (based on LLVM 3.5svn)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/2.6.1 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libx264 --enable-libmp3lame --enable-libvo-aacenc --enable-libxvid --enable-vda
     libavutil      54. 20.100 / 54. 20.100
     libavcodec     56. 26.100 / 56. 26.100
     libavformat    56. 25.101 / 56. 25.101
     libavdevice    56.  4.100 / 56.  4.100
     libavfilter     5. 11.102 /  5. 11.102
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'AXON_Flex_Video_2000-01-01_0411.mp4':
     Metadata:
       major_brand     : mp42
       minor_version   : 1
       compatible_brands: isom3gp43gp5
     Duration: 00:00:15.73, start: 0.000000, bitrate: 1134 kb/s
       Stream #0:0(eng): Audio: aac (LC) (mp4a / 0x6134706D), 8000 Hz, mono, fltp, 40 kb/s (default)
       Metadata:
         handler_name    : soun
       Stream #0:1(eng): Video: mpeg4 (Simple Profile) (mp4v / 0x7634706D), yuv420p, 640x480 [SAR 1:1 DAR 4:3], 1087 kb/s, 29.32 fps, 26.58 tbr, 90k tbn, 1k tbc (default)
       Metadata:
         handler_name    : vide

    Here is my NDK code for setting up and decoding the audio:

    jint ffmpeg_init(JNIEnv * env, jobject this) {
       audioCodec = avcodec_find_decoder(AV_CODEC_ID_AAC);
       if (!audioCodec) {
           LOGE("audio codec %d not found", AV_CODEC_ID_AAC);
           return -1;
       }

       audioContext = avcodec_alloc_context3(audioCodec);
       if (!audioContext) {
           LOGE("Could not allocate codec context");
           return -1;
       }

       int openRet = avcodec_open2(audioContext, audioCodec, NULL);
       if (openRet < 0) {
           LOGE("Could not open codec, error:%d", openRet);
           return -1;
       }

       audioContext->sample_rate = 8000;
       audioContext->channel_layout = AV_CH_LAYOUT_MONO;
       audioContext->profile = FF_PROFILE_AAC_LOW;
       audioContext->bit_rate = 48 * 1024;
       audioContext->sample_fmt = AV_SAMPLE_FMT_FLTP;

     //  unsigned char extradata[] = {0x15, 0x88};
     //  audioContext->extradata = extradata;
     //  audioContext->extradata_size = sizeof(extradata);
       audioFrame = av_frame_alloc();
       if (!audioFrame) {
           LOGE("Could not create audio frame");
           return -1;
       }
       return 0;
    }


    jint ffmpeg_decodeAudio(JNIEnv *env, jobject this, jbyteArray aacData, jbyteArray output, int offset, int len) {

       LOGI("ffmpeg_decodeAudio()");
       char errbuf[128];
       AVPacket avpkt = {0};
       av_init_packet(&avpkt);
       LOGI("av_init_packet()");
       int error, got_frame;    
       uint8_t* buffer = (uint8_t *) (*env)->GetByteArrayElements(env, aacData,0);
       uint8_t* copy = av_malloc(len);  
       memcpy(copy, &buffer[offset], len);
       av_packet_from_data(&avpkt, copy, len);


       if ((error = avcodec_decode_audio4(audioContext, audioFrame, &got_frame, &avpkt)) < 0) {
           ffmpeg_log_error(error);
           av_free_packet(&avpkt);
           return error;
       }
       if (got_frame) {
           LOGE("Copying audioFrame->extended_data to output jbytearray, linesize[0]:%d", audioFrame->linesize[0]);
           (*env)->SetByteArrayRegion(env, output, 0, audioFrame->linesize[0],  *audioFrame->extended_data);
       }

       return 0;

    }

    As you can see, I've got an init function that opens the decoder and creates the context; these things all work fine, without error. However, when I call avcodec_decode_audio4 I get an error:

    FFMPEG error : -1094995529, Invalid data found when processing input

    I've tried all sorts of combinations of AVCodecContext properties. I'm not sure which ones I need to set for the decoder to do its job, but from reading online I should only need to set the channel layout and the sample_rate (which I've tried by themselves). I've also tried setting the extradata/extradata_size parameters to values that should match the video's settings per http://wiki.multimedia.cx/index.php?title=MPEG-4_Audio, but no luck (see also the sketch at the end of this post).

    Since the device we're getting packets from sends AAC data that has no sound at the beginning (but consists of valid packets), I've tried sending just those, since they should definitely decode correctly.

    Here is an example of one of these initial, silent audio packets:

    010c9eb43f21f90fc87e46fff10a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5a5dffe214b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4b4bbd1c429696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696969696978

    Note that the data shown above is just a hex encoding of the data I'm putting into AVPacket; it was sent from the external device to the Android application. My application doesn't have direct access to the file, so I need to decode the raw frames/samples as I receive them. When I look at the audio track data in isoviewer, I can see that the audio track's first sample is the same data I got from the device that contained that file (so the external device is just sending me the raw sample data). I believe this data can be located by reading the stsz (sample size) and stco (chunk offset) boxes and following them into the mdat box of the file.

    Also, isoviewer shows the esds box as having the following:

    ESDescriptor{esId=0, streamDependenceFlag=0, URLFlag=0, oCRstreamFlag=0, streamPriority=0, URLLength=0, URLString='null', remoteODFlag=0, dependsOnEsId=0, oCREsId=0, decoderConfigDescriptor=DecoderConfigDescriptor{objectTypeIndication=64, streamType=5, upStream=0, bufferSizeDB=513, maxBitRate=32000, avgBitRate=32000, decoderSpecificInfo=null, audioSpecificInfo=AudioSpecificConfig{configBytes=1588, audioObjectType=2 (AAC LC), samplingFrequencyIndex=11 (8000), samplingFrequency=0, channelConfiguration=1, syncExtensionType=0, frameLengthFlag=0, dependsOnCoreCoder=0, coreCoderDelay=0, extensionFlag=0, layerNr=0, numOfSubFrame=0, layer_length=0, aacSectionDataResilienceFlag=false, aacScalefactorDataResilienceFlag=false, aacSpectralDataResilienceFlag=false, extensionFlag3=0}, configDescriptorDeadBytes=, profileLevelIndicationDescriptors=[[]]}, slConfigDescriptor=SLConfigDescriptor{predefined=2}}

    And the binary is this:

    00 00 00 30 65 73 64 73 00 00 00 00 03 80 80 80
    1f 00 00 00 04 80 80 80 14 40 15 00 02 01 00 00
    7d 00 00 00 7d 00 05 80 80 80 02 15 88 06 01 02
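
    For reference, here is a minimal sketch of the init variant I'm experimenting with (an assumption on my part, not a confirmed fix; ffmpeg_init_alt is just a hypothetical name, and it reuses the same globals and headers as the functions above): the two AudioSpecificConfig bytes from the esds (0x15 0x88) go into extradata allocated with av_mallocz plus libavcodec's required padding, and every field is set before avcodec_open2 instead of after it:

    // Sketch only: configure the context fully, including extradata, *before* opening it.
    static const uint8_t aac_asc[2] = { 0x15, 0x88 };  /* AAC LC, 8000 Hz, mono (from the esds above) */

    jint ffmpeg_init_alt(JNIEnv *env, jobject this) {
        audioCodec = avcodec_find_decoder(AV_CODEC_ID_AAC);
        if (!audioCodec) return -1;

        audioContext = avcodec_alloc_context3(audioCodec);
        if (!audioContext) return -1;

        audioContext->sample_rate    = 8000;
        audioContext->channels       = 1;
        audioContext->channel_layout = AV_CH_LAYOUT_MONO;
        audioContext->sample_fmt     = AV_SAMPLE_FMT_FLTP;

        /* extradata must be av_malloc'ed and padded; a stack array (as in the
           commented-out code above) is not valid here. On newer libavcodec the
           constant is AV_INPUT_BUFFER_PADDING_SIZE. */
        audioContext->extradata = av_mallocz(sizeof(aac_asc) + FF_INPUT_BUFFER_PADDING_SIZE);
        if (!audioContext->extradata) return -1;
        memcpy(audioContext->extradata, aac_asc, sizeof(aac_asc));
        audioContext->extradata_size = sizeof(aac_asc);

        if (avcodec_open2(audioContext, audioCodec, NULL) < 0) return -1;

        audioFrame = av_frame_alloc();
        return audioFrame ? 0 : -1;
    }

    As far as I understand, the extradata is only read when avcodec_open2 configures the decoder, so setting fields afterwards (as in ffmpeg_init above) doesn't reach it; the other usual route would be to wrap each raw frame in an ADTS header so the decoder can configure itself from the bitstream instead.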