
Media (91)

Other articles (39)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that led to the problem; and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • (De)activation of features (plugins)

    18 February 2011, by

    To manage the addition and removal of extra features (or plugins), MediaSPIP uses SVP as of version 0.2.
    SVP makes it easy to activate plugins from the MediaSPIP configuration area.
    To access it, simply go to the configuration area and then to the "Gestion des plugins" (plugin management) page.
    By default, MediaSPIP ships with the full set of so-called "compatible" plugins; they have been tested and integrated so that they work perfectly with each (...)

  • The plugin: Podcasts.

    14 July 2010, by

    The podcasting problem is, once again, a problem that reveals how data transport is standardized on the Internet.
    Two interesting formats exist: the one developed by Apple, strongly geared towards the use of iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "open" and is notably backed by Yahoo and the Miro software.
    File types supported in the feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg; .m4a audio/x-m4a; .mp4 (...)

On other sites (6040)

  • Write audio packet to file using ffmpeg

    27 February 2017, by iamyz

    I am trying to write audio packets to a file using ffmpeg. The source device sends a packet after some interval, e.g.

    First packet has a time stamp 00:00:00
    Second packet has a time stamp 00:00:00.5000000
    Third packet has a time stamp 00:00:01
    And so on...

    That means two packets per second.

    I want to encode those packets and write them to a file.

    I am referring to the FFmpeg example from this link: Muxing.c

    There is no error while encoding and writing, but the output file has only 2 seconds of audio and the playback speed is also super fast.

    The video frames are correct according to the settings.

    I think the problem is related to the calculation of the packet's pts, dts and duration.

    How should I calculate proper values for pts, dts and duration? Or is this problem related to something else?

    Code:

    void AudioWriter::WriteAudioChunk(IntPtr chunk, int lenght, TimeSpan timestamp)
    {
       int buffer_size = av_samples_get_buffer_size(NULL, outputStream->tmp_frame->channels, outputStream->tmp_frame->nb_samples,  outputStream->AudioStream->codec->sample_fmt, 0);

       uint8_t *audioData = reinterpret_cast<uint8_t*>(static_cast<void*>(chunk));
       int ret = avcodec_fill_audio_frame(outputStream->tmp_frame,outputStream->Channels, outputStream->AudioStream->codec->sample_fmt, audioData, buffer_size, 1);

       if (!ret)
          throw gcnew System::IO::IOException("A audio file was not opened yet.");

       write_audio_frame(outputStream->FormatContext, outputStream, audioData);
    }


    static int write_audio_frame(AVFormatContext *oc, AudioWriterData^ ost, uint8_t *audioData)
    {
          AVCodecContext *c;
          AVPacket pkt = { 0 };
          int ret;
          int got_packet;
          int dst_nb_samples;

          av_init_packet(&pkt);
          c = ost->AudioStream->codec;

          AVFrame *frame = ost->tmp_frame;

         if (frame)
         {
             dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples, c->sample_rate, c->sample_rate, AV_ROUND_UP);
             if (dst_nb_samples != frame->nb_samples)
               throw gcnew Exception("dst_nb_samples != frame->nb_samples");

             ret = av_frame_make_writable(ost->AudioFrame);
             if (ret < 0)
                throw gcnew Exception("Unable to make writable.");

             ret = swr_convert(ost->swr_ctx, ost->AudioFrame->data, dst_nb_samples, (const uint8_t **)frame->data, frame->nb_samples);
             if (ret < 0)
               throw gcnew Exception("Unable to convert to destination format.");

             frame = ost->AudioFrame;

             AVRational timebase = { 1, c->sample_rate };
             frame->pts = av_rescale_q(ost->samples_count, timebase, c->time_base);
             ost->samples_count += dst_nb_samples;
         }

         ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
         if (ret < 0)
           throw gcnew Exception("Error encoding audio frame.");

         if (got_packet)
         {
           ret = write_frame(oc, &c->time_base, ost->AudioStream, &pkt);
           if (ret < 0)
               throw gcnew Exception("Audio is not written.");
         }
         else
            throw gcnew Exception("Audio packet encode failed.");

         return (ost->AudioFrame || got_packet) ? 0 : 1;
    }

    static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
    {
       av_packet_rescale_ts(pkt, *time_base, st->time_base);
       pkt->stream_index = st->index;
       return av_interleaved_write_frame(fmt_ctx, pkt);
    }
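
    For reference, the Muxing.c pattern keeps a running sample counter in a 1/sample_rate time base and rescales it into the codec and then the stream time base. The sketch below is a minimal illustration of that bookkeeping against the plain FFmpeg C API; it is not the poster's code, and the 44100 Hz rate and the 22050-sample half-second chunk are illustrative assumptions.

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    }

    // Sketch: assign pts as a running sample count, encode, then rescale the
    // packet timestamps into the stream time base before writing.
    static int encode_and_write(AVFormatContext *oc, AVStream *st,
                                AVCodecContext *c, AVFrame *frame,
                                int64_t *samples_count)
    {
        if (frame) {
            AVRational sample_tb = { 1, c->sample_rate };            // e.g. 1/44100
            frame->pts = av_rescale_q(*samples_count, sample_tb, c->time_base);
            *samples_count += frame->nb_samples;                     // e.g. +22050 per half-second chunk
        }

        AVPacket pkt = { 0 };
        av_init_packet(&pkt);
        int got_packet = 0;
        int ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
        if (ret < 0)
            return ret;

        if (got_packet) {
            // pts, dts and duration are converted here in one step; they should
            // not be set by hand in a different time base afterwards.
            av_packet_rescale_ts(&pkt, c->time_base, st->time_base);
            pkt.stream_index = st->index;
            ret = av_interleaved_write_frame(oc, &pkt);
        }
        return ret;
    }

    Note that encoders such as AAC typically expect frames of exactly c->frame_size samples, so a half-second chunk usually has to be split (for example through an AVAudioFifo) before being fed to avcodec_encode_audio2; otherwise the sample counter above only advances by one encoder frame per chunk.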
  • What codec/format to use for fastest possible decoding?

    2 March 2017, by JT J

    I’m using an ffmpeg script (in Windows) that extracts all the keyframes from a video and pastes them into a folder. I’ve made sure that my drive speed, CPU, and RAM are not causing a bottleneck.

    The quality of the video is actually not important at all in this case. I need to encode the video that the script extracts frames from so that it has the fastest possible decoding speed. File size and quality are not important, only read speed. The video does not have audio. What would work best for me?

    If it matters, here's the script I'm working with:

    ffmpeg -i input.mp4 -vf "select=eq(pict_type\,I)" -vsync 1 %%3d.bmp

    Sorry if I sound like I don't know what I'm talking about; this is not a topic I am super familiar with. I appreciate your help!

  • Android: how to film a video before extracting its audio

    20 February 2017, by MrOrgon

    Despite many searches, I haven't been able to develop an Android prototype able to film a video before extracting its audio as .wav in a separate activity.

    So far I have developed a simple filming activity which relies on Android's Camera application. My strategy was to pass the video's Uri as an Extra to the next activity, before using FFmpeg, but I can't make the transition between the Uri and FFmpeg. Indeed, I'm a fresh Android Studio beginner, so I am still not sure which concepts to use.

    Here’s my code for the video recording activity.

    import android.app.Activity;
    import android.content.Intent;
    import android.net.Uri;
    import android.os.Build;
    import android.os.Bundle;
    import android.provider.MediaStore;
    import android.widget.Toast;
    import android.widget.VideoView;

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.channels.FileChannel;

    import static java.security.AccessController.getContext;


    public class RecordActivity extends Activity{

    static final int REQUEST_VIDEO_CAPTURE = 0;

    VideoView mVideoView = null;
    Uri videoUri = null;

    @Override
    public void onCreate(Bundle savedInstanceState) {
       super.onCreate(savedInstanceState);
       mVideoView = (VideoView) findViewById(R.id.videoVieww);
       setContentView(R.layout.activity_record);

       Intent takeVideoIntent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);

       Toast.makeText(RecordActivity.this,         String.valueOf(Build.VERSION.SDK_INT) , Toast.LENGTH_SHORT).show();

       takeVideoIntent.putExtra(MediaStore.EXTRA_OUTPUT, videoUri);
       if (takeVideoIntent.resolveActivity(getPackageManager()) != null) {
           startActivityForResult(takeVideoIntent, REQUEST_VIDEO_CAPTURE);
       }

    }


       @Override
       protected void onActivityResult(int requestCode, int resultCode, Intent intent) {
           if (requestCode == REQUEST_VIDEO_CAPTURE && resultCode == RESULT_OK) {
               videoUri = intent.getData();

               Intent intentForFilterActivity = new Intent(RecordActivity.this, FilterActivity.class);
               intentForFilterActivity.putExtra("VideoToFilter", videoUri.getPath());
               startActivity(intentForFilterActivity);

           }
       }
    }

    Here's the code for the audio extraction activity. It is called "FilterActivity", as its final aim is to filter outdoor noise using additional functions. I'm using WritingMinds' implementation of FFmpeg.
    https://github.com/WritingMinds/ffmpeg-android-java

    import android.app.Activity;
    import android.content.Intent;
    import android.os.Bundle;
    import android.test.ActivityUnitTestCase;
    import android.widget.Toast;

    import com.github.hiteshsondhi88.libffmpeg.ExecuteBinaryResponseHandler;
    import com.github.hiteshsondhi88.libffmpeg.FFmpeg;
    import  com.github.hiteshsondhi88.libffmpeg.exceptions.FFmpegCommandAlreadyRunningException;



    public class FilterActivity extends Activity {

    @Override
    public void onCreate(Bundle savedInstanceState) {

       super.onCreate(savedInstanceState);
       setContentView(R.layout.activity_filter);

       Intent intentVideo = getIntent();
       String pathIn = intentVideo.getStringExtra("VideoToFilter");

       FFmpeg ffmpeg = FFmpeg.getInstance(FilterActivity.this);
       try {
           String[] cmdExtract = {"-i " + pathIn + " extracted.wav"};
           ffmpeg.execute(cmdExtract, new ExecuteBinaryResponseHandler() {

               @Override
               public void onStart() {}

               @Override
               public void onProgress(String message) {}

               @Override
               public void onFailure(String message) {
                   Toast.makeText(FilterActivity.this, "Failure !", Toast.LENGTH_SHORT).show();
               }

               @Override
               public void onSuccess(String message) {}

               @Override
               public void onFinish() {}
           });
       } catch (FFmpegCommandAlreadyRunningException e) {
       }
    }


    }

    and I always get the "Failure !" message.

    Some parts of the code may look extremely bad. As written previously, I'm a real Android Studio beginner.

    Do you have any correction that could work? Or even just a strategy?

    Thank you in advance!
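
    For comparison, here is a minimal sketch of how a command is usually handed to ffmpeg-android-java's execute(): every argument goes into its own array element, and the output file gets an absolute, app-writable path. This is not the original code; the class name, the -vn flag and the "extracted.wav" location under getExternalFilesDir() are illustrative assumptions.

    import java.io.File;

    import android.app.Activity;
    import android.util.Log;

    import com.github.hiteshsondhi88.libffmpeg.ExecuteBinaryResponseHandler;
    import com.github.hiteshsondhi88.libffmpeg.FFmpeg;
    import com.github.hiteshsondhi88.libffmpeg.exceptions.FFmpegCommandAlreadyRunningException;

    // Sketch only: one ffmpeg argument per array element, output to a writable path.
    public class AudioExtractSketch extends Activity {

        void extractAudio(String pathIn) {
            // Illustrative output location inside the app's external files directory.
            File outFile = new File(getExternalFilesDir(null), "extracted.wav");
            String[] cmdExtract = {"-i", pathIn, "-vn", outFile.getAbsolutePath()};

            FFmpeg ffmpeg = FFmpeg.getInstance(this);
            try {
                ffmpeg.execute(cmdExtract, new ExecuteBinaryResponseHandler() {
                    @Override
                    public void onFailure(String message) {
                        // message contains ffmpeg's own error output, which narrows down the cause
                        Log.e("AudioExtractSketch", message);
                    }
                });
            } catch (FFmpegCommandAlreadyRunningException e) {
                Log.e("AudioExtractSketch", "ffmpeg is already running", e);
            }
        }
    }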