Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (67)

  • Updating from version 0.1 to 0.2

    24 June 2013

    An explanation of the notable changes involved in moving from MediaSPIP version 0.1 to version 0.3. What's new:
    Software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favor of flvtool++; ffmpeg-php is no longer installed, as it is no longer maintained (...)

  • Customizing by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MediaSPIP site, or news about your projects, through the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news item creation form.
    News item creation form: for a document of the news type, the default fields are: publication date (customize the publication date) (...)

On other sites (9707)

  • Xuggle - Concatenate two videos - Error - java.lang.RuntimeException: error -1094995529 decoding audio

    1 April 2013, by user2232357

    I am using the Xuggle API to concatenate two MPEG videos (with audio embedded in the MPEGs).
    I am following https://code.google.com/p/xuggle/source/browse/trunk/java/xuggle-xuggler/src/com/xuggle/mediatool/demos/ConcatenateAudioAndVideo.java?r=929 (both my inputs and the output are MPEGs).

    I am getting the error below. (A short note on what this error number means follows the complete code listing.)

    14:06:50.139 [main] ERROR org.ffmpeg - [mp2 @ 0x7fd54693d000] incomplete frame
    java.lang.RuntimeException: error -1094995529 decoding audio
       at com.xuggle.mediatool.MediaReader.decodeAudio(MediaReader.java:549)
       at com.xuggle.mediatool.MediaReader.readPacket(MediaReader.java:469)
       at com.tav.factory.video.XuggleMediaCreator.concatenateAllVideos(XuggleMediaCreator.java:271)
       at com.tav.factory.video.XuggleMediaCreator.main(XuggleMediaCreator.java:446)

    Can anyone help me with this? Thanks in advance.

    Here is the complete code.

    public String concatenateAllVideos(ArrayList<TAVTextToAVRequest> list) { // element type name reconstructed from the mangled listing
           String finalPath="";


           String sourceUrl1 = "/Users/SSID/WS/SampleTTS/page2/AV_TAVImage2.mpeg";
           String sourceUrl2 = "/Users/SSID/WS/SampleTTS/page2/AV_TAVImage3.mpeg";
           String destinationUrl = "/Users/SSID/WS/SampleTTS/page2/z_AV_TAVImage_Final23.mpeg";

           out.printf("transcode %s + %s -> %s\n", sourceUrl1, sourceUrl2,
             destinationUrl);

           //////////////////////////////////////////////////////////////////////
           //                                                                  //
           // NOTE: be sure that the audio and video parameters match those of //
           // your input media                                                 //
           //                                                                  //
           //////////////////////////////////////////////////////////////////////

           // video parameters

           final int videoStreamIndex = 0;
           final int videoStreamId = 0;
           final int width = 400;
           final int height = 400;

           // audio parameters

           final int audioStreamIndex = 1;
           final int audioStreamId = 0;
           final int channelCount = 1;
           final int sampleRate = 16000 ; // Hz 16000 44100;

           // create the first media reader

           IMediaReader reader1 = ToolFactory.makeReader(sourceUrl1);

           // create the second media reader

           IMediaReader reader2 = ToolFactory.makeReader(sourceUrl2);

           // create the media concatenator

           MediaConcatenator concatenator = new MediaConcatenator(audioStreamIndex,
             videoStreamIndex);

           // concatenator listens to both readers

           reader1.addListener(concatenator);
           reader2.addListener(concatenator);

           // create the media writer which listens to the concatenator

           IMediaWriter writer = ToolFactory.makeWriter(destinationUrl);
           concatenator.addListener(writer);

           // add the video stream

           writer.addVideoStream(videoStreamIndex, videoStreamId, width, height);

           // add the audio stream

           writer.addAudioStream(audioStreamIndex, audioStreamId, channelCount,sampleRate);

           // read packets from the first source file until done

           try {
               while (reader1.readPacket() == null)
                 ;
           } catch (Exception e) {
               // TODO Auto-generated catch block
               e.printStackTrace();
           }

           // read packets from the second source file until done

           try {
               while (reader2.readPacket() == null)
                 ;
           } catch (Exception e) {
               // TODO Auto-generated catch block
               e.printStackTrace();
           }

           // close the writer

           writer.close();


           return finalPath;
       }

       static class MediaConcatenator extends MediaToolAdapter
         {
           // the current offset

           private long mOffset = 0;

           // the next video timestamp

           private long mNextVideo = 0;

           // the next audio timestamp

           private long mNextAudio = 0;

           // the index of the audio stream

           private final int mAudoStreamIndex;

           // the index of the video stream

           private final int mVideoStreamIndex;

           /**
            * Create a concatenator.
            *
            * @param audioStreamIndex index of audio stream
            * @param videoStreamIndex index of video stream
            */

           public MediaConcatenator(int audioStreamIndex, int videoStreamIndex)
           {
             mAudoStreamIndex = audioStreamIndex;
             mVideoStreamIndex = videoStreamIndex;
           }

           public void onAudioSamples(IAudioSamplesEvent event)
           {
             IAudioSamples samples = event.getAudioSamples();

             // set the new time stamp to the original plus the offset established
             // for this media file

             long newTimeStamp = samples.getTimeStamp() + mOffset;

             // keep track of predicted time of the next audio samples, if the end
             // of the media file is encountered, then the offset will be adjusted
             // to this time.

             mNextAudio = samples.getNextPts();

             // set the new timestamp on audio samples

             samples.setTimeStamp(newTimeStamp);

             // create a new audio samples event with the one true audio stream
             // index

             super.onAudioSamples(new AudioSamplesEvent(this, samples,
               mAudoStreamIndex));
           }

           public void onVideoPicture(IVideoPictureEvent event)
           {
             IVideoPicture picture = event.getMediaData();
             long originalTimeStamp = picture.getTimeStamp();

             // set the new time stamp to the original plus the offset established
             // for this media file

             long newTimeStamp = originalTimeStamp + mOffset;

             // keep track of predicted time of the next video picture, if the end
             // of the media file is encountered, then the offset will be adjusted
              // to this time.
              //
              // You'll note in the audio samples listener above we used
              // a method called getNextPts().  Video pictures don't have
              // a similar method because frame-rates can be variable, so
              // we don't know.  The minimum thing we do know though (since
             // all media containers require media to have monotonically
             // increasing time stamps), is that the next video timestamp
             // should be at least one tick ahead.  So, we fake it.

             mNextVideo = originalTimeStamp + 1;

             // set the new timestamp on video samples

             picture.setTimeStamp(newTimeStamp);

             // create a new video picture event with the one true video stream
             // index

             super.onVideoPicture(new VideoPictureEvent(this, picture,
               mVideoStreamIndex));
           }

           public void onClose(ICloseEvent event)
           {
             // update the offset by the larger of the next expected audio or video
             // frame time

             mOffset = Math.max(mNextVideo, mNextAudio);

              if (mNextAudio < mNextVideo)
             {
               // In this case we know that there is more video in the
               // last file that we read than audio. Technically you
               // should pad the audio in the output file with enough
               // samples to fill that gap, as many media players (e.g.
               // Quicktime, Microsoft Media Player, MPlayer) actually
               // ignore audio time stamps and just play audio sequentially.
                // If you don't pad, in those players it may look like
               // audio and video is getting out of sync.

               // However kiddies, this is demo code, so that code
               // is left as an exercise for the readers. As a hint,
               // see the IAudioSamples.defaultPtsToSamples(...) methods.
             }
           }

           public void onAddStream(IAddStreamEvent event)
           {
             // overridden to ensure that add stream events are not passed down
             // the tool chain to the writer, which could cause problems
           }

           public void onOpen(IOpenEvent event)
           {
             // overridden to ensure that open events are not passed down the tool
             // chain to the writer, which could cause problems
           }

           public void onOpenCoder(IOpenCoderEvent event)
           {
             // overridden to ensure that open coder events are not passed down the
             // tool chain to the writer, which could cause problems
           }

           public void onCloseCoder(ICloseCoderEvent event)
           {
             // overridden to ensure that close coder events are not passed down the
             // tool chain to the writer, which could cause problems
           }
         }
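
    For reference, FFmpeg error numbers such as -1094995529 are negated four-character tags. The short, self-contained Java sketch below (plain arithmetic, not part of the Xuggle API) decodes this one to "INDA", i.e. AVERROR_INVALIDDATA ("Invalid data found when processing input"), which matches the mp2 decoder's "incomplete frame" message in the log above.

    // Standalone sketch: decode an FFmpeg error number into its four-character tag.
    // FFmpeg builds these codes as the negated little-endian FOURCC of the tag,
    // so -1094995529 decodes to "INDA", i.e. AVERROR_INVALIDDATA.
    public class FfmpegErrorTag {
        static String tagOf(int errorCode) {
            int tag = -errorCode; // un-negate to recover the packed FOURCC
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 4; i++) {
                sb.append((char) ((tag >> (8 * i)) & 0xFF)); // bytes are packed little-endian
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            System.out.println(tagOf(-1094995529)); // prints "INDA"
        }
    }
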
  • PHP FFmpeg video aspect ratio problem

    18 May 2013, by Herr K

    I compiled the new version of FFmpeg, and the padding commands have been deprecated.
    As I try to get familiar with the new -vf pad= syntax, I want to ask: how can I
    convert a video without changing its aspect ratio?

    I've checked numerous solutions from Stack Overflow; nothing seemed to work.
    Can someone please post a working PHP example or command line? I would be VERY happy.

    Please note that the videos in question could be 4:3 as well as 16:9.

    Let's say I convert a 16:9 video to 640x480. It will need some bars at the
    top and at the bottom. That is what I want to do.

    Thanks
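
    For what it's worth, here is a hedged sketch of the kind of command this calls for, under the stated assumptions (the inputs are 4:3 or 16:9, the target is 640x480, and the ffmpeg build accepts scale=640:-2): scale to fit the width, then pad the remaining height with centered black bars. The ffmpeg path and file names below are placeholders, and the same -vf string could equally be passed to exec() from PHP.

    import java.io.IOException;
    import java.util.List;

    // Hedged sketch: letterbox a 4:3 or 16:9 input into 640x480 without distorting it.
    // Placeholders: the ffmpeg path and file names; assumes "scale=640:-2" is available.
    public class LetterboxTo640x480 {
        public static void main(String[] args) throws IOException, InterruptedException {
            String ffmpeg = "/usr/local/bin/ffmpeg";   // adjust to your install
            String input  = "input.mp4";               // placeholder source
            String output = "output_640x480.mp4";      // placeholder destination

            // Scale to 640 wide (keeping aspect, even height), then pad to 640x480,
            // centering the picture so the bars end up at the top and bottom.
            String filter = "scale=640:-2,pad=640:480:(ow-iw)/2:(oh-ih)/2";

            Process p = new ProcessBuilder(List.of(
                    ffmpeg, "-y", "-i", input, "-vf", filter, output))
                .inheritIO()
                .start();
            System.out.println("ffmpeg exited with " + p.waitFor());
        }
    }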

  • PHP FFmpeg FLV conversion error

    8 July 2013, by arjun

    I am trying to convert an MP4 file to FLV, but I am getting zero-size FLV files.

    Can you please help me out with this? Here is the output, captured via var_dump($output):

    FFmpeg version SVN-r26402, Copyright (c) 2000-2011 the FFmpeg developers
      built on Jan 12 2012 16:07:49 with gcc 4.1.2 20080704 (Red Hat 4.1.2-51)
      configuration: --enable-libmp3lame --enable-gpl --enable-libvorbis --disable-mmx --enable-shared
      libavutil     50.36. 0 / 50.36. 0
      libavcore      0.16. 1 /  0.16. 1
      libavcodec    52.108. 0 / 52.108. 0
      libavformat   52.93. 0 / 52.93. 0
      libavdevice   52. 2. 3 / 52. 2. 3
      libavfilter    1.74. 0 /  1.74. 0
      libswscale     0.12. 0 /  0.12. 0
    Seems stream 0 codec frame rate differs from container frame rate: 30000.00 (30000/1) -> 15.00 (15/1)
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '16112007069.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp423gp4isom
        creation_time   : 2007-11-16 15:44:57
      Duration: 00:00:32.64, start: 0.000000, bitrate: 562 kb/s
        Stream #0.0(und): Video: mpeg4, yuv420p, 352x288 [PAR 1:1 DAR 11:9], 512 kb/s, 15 fps, 15 tbr, 30k tbn, 30k tbc
        Metadata:
          creation_time   : 2007-11-16 15:44:57
        Stream #0.1(und): Audio: aac, 16000 Hz, mono, s16, 48 kb/s
        Metadata:
          creation_time   : 2007-11-16 15:44:58
    WARNING: The bitrate parameter is set too low. It takes bits/s as argument, not kbits/s
    [buffer @ 0x94a9de0] w:352 h:288 pixfmt:yuv420p
    [libmp3lame @ 0x94a9560] flv does not support that sample rate, choose from (44100, 22050, 11025).
    Output #0, flv, to '16112007069.flv':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp423gp4isom
        creation_time   : 2007-11-16 15:44:57
        encoder         : Lavf52.93.0
        Stream #0.0(und): Video: flv, yuv420p, 352x288 [PAR 1:1 DAR 11:9], q=2-31, 200 kb/s, 1k tbn, 15 tbc
        Metadata:
          creation_time   : 2007-11-16 15:44:57
        Stream #0.1(und): Audio: libmp3lame, 16000 Hz, mono, s16, 0 kb/s
        Metadata:
          creation_time   : 2007-11-16 15:44:58
    Stream mapping:
      Stream #0.0 -> #0.0
      Stream #0.1 -> #0.1
    Could not write header for output file #0 (incorrect codec parameters ?)

    ffmpeg: /usr/local/bin/ffmpeg
    file path: 16112007069.mp4
    srcAR: 16000
    srcAB: 48
    dimention: 352 x 288
    destFile: 16112007069.flv

    The coding part :)

    $srcFile = "/file destination/16112007069.mp4";  
    $destFile = "/file destination/16112007069.flv";
    $ffmpegPath = "/usr/local/bin/ffmpeg";

    $ffmpegObj = new ffmpeg_movie($srcFile);

    $srcWidth = makeMultipleTwo($ffmpegObj->getFrameWidth());  
    $srcHeight = makeMultipleTwo($ffmpegObj->getFrameHeight());
    $srcFPS = $ffmpegObj->getFrameRate();  
    $srcAB = intval($ffmpegObj->getAudioBitRate());  
    $srcAR = $ffmpegObj->getAudioSampleRate();

    exec($ffmpegPath . " -i " . $srcFile . " -ar " . $srcAR . " -ab " . $srcAB . " -f flv -s " . $srcWidth . "x" . $srcHeight . " " . $destFile . " 2>&1", $output);

    var_dump($output);

    print "<br />ffmpeg: $ffmpegPath<br />"; print "file path: $srcFile<br />"; print "srcAR: $srcAR<br />"; print "srcAB: $srcAB<br />"; print
    "dimention: $srcWidth x $srcHeight<br />"; print "destFile:
    $destFile<br />";


    function makeMultipleTwo ($value)  {
       $sType = gettype($value/2);
       if($sType == "integer")
       {
           return $value;
       } else {
           return ($value-1);
       }  
    }

    Note: I added

    if($srcAR<41000){ $srcAR = "41000"; }

    above, and it's working!
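
    For completeness, here is a hedged Java sketch of the two fixes the log above points at: FLV's MP3 audio only accepts 44100, 22050 or 11025 Hz, and -ab expects bits per second rather than kbits. The probed values (16000 Hz, 48 kb/s, 352x288) and file names are taken from the output above; the helper and everything else is an assumption for illustration, not the original script.

    import java.util.List;

    // Hedged sketch: build a corrected ffmpeg argument list for the FLV conversion.
    // Source values (16000 Hz, 48 kb/s, 352x288, file names) come from the log above.
    public class FlvArgs {
        // Pick the lowest FLV-supported MP3 sample rate that is not below the source rate.
        static int flvSampleRate(int sourceRate) {
            for (int rate : new int[] {11025, 22050, 44100}) {
                if (rate >= sourceRate) return rate;
            }
            return 44100;
        }

        public static void main(String[] args) {
            int srcAR = 16000; // probed audio sample rate
            int srcAB = 48;    // probed audio bitrate, in kb/s

            List<String> cmd = List.of(
                    "/usr/local/bin/ffmpeg", "-i", "16112007069.mp4",
                    "-ar", String.valueOf(flvSampleRate(srcAR)), // 22050 for a 16000 Hz source
                    "-ab", String.valueOf(srcAB * 1000),         // bits/s, not kbits/s
                    "-f", "flv", "-s", "352x288", "16112007069.flv");
            System.out.println(String.join(" ", cmd));
        }
    }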