
Other articles (89)

  • Other interesting software

    12 April 2011, by

    We don’t claim to be the only ones doing what we do... and we certainly don’t claim to be the best either... We just try to do what we do well, and to do it better and better...
    The following list covers software that more or less aims to do what MediaSPIP does, or that MediaSPIP more or less tries to do likewise; either way...
    We don’t know them and haven’t tried them, but you may want to take a look.
    Videopress
    Website: (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player MediaSPIP uses was created specifically for it and can easily be adapted to fit a particular theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries: FFmpeg: the main encoder, used to transcode almost any type of video or audio file into formats playable on the web (see this tutorial for its installation); Oggz-tools: inspection tools for Ogg files; MediaInfo: retrieves metadata from most video and audio formats;
    Optional complementary binaries: flvtool2: (...)

On other sites (11028)

  • Android: how to film a video before extracting its audio

    20 February 2017, by MrOrgon

    Despite many searches, I haven’t been able to develop an Android prototype that films a video and then extracts its audio as a .wav file in a separate activity.

    So far I have developed a simple filming activity that relies on Android’s Camera application. My strategy was to put the video’s Uri as an Extra for the next activity, then use FFmpeg there, but I can’t make the transition between the Uri and FFmpeg. I’m a fresh Android Studio beginner, so I’m still not sure which concept to use.

    Here’s my code for the video recording activity.

    import android.app.Activity;
    import android.content.Intent;
    import android.net.Uri;
    import android.os.Build;
    import android.os.Bundle;
    import android.provider.MediaStore;
    import android.widget.Toast;
    import android.widget.VideoView;


    public class RecordActivity extends Activity {

        static final int REQUEST_VIDEO_CAPTURE = 0;

        VideoView mVideoView = null;
        Uri videoUri = null;

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // setContentView must run before findViewById, otherwise the lookup returns null.
            setContentView(R.layout.activity_record);
            mVideoView = (VideoView) findViewById(R.id.videoVieww);

            Intent takeVideoIntent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);

            Toast.makeText(RecordActivity.this, String.valueOf(Build.VERSION.SDK_INT), Toast.LENGTH_SHORT).show();

            // videoUri is still null at this point; the camera app returns its own Uri in onActivityResult.
            takeVideoIntent.putExtra(MediaStore.EXTRA_OUTPUT, videoUri);
            if (takeVideoIntent.resolveActivity(getPackageManager()) != null) {
                startActivityForResult(takeVideoIntent, REQUEST_VIDEO_CAPTURE);
            }
        }

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent intent) {
            if (requestCode == REQUEST_VIDEO_CAPTURE && resultCode == RESULT_OK) {
                videoUri = intent.getData();

                Intent intentForFilterActivity = new Intent(RecordActivity.this, FilterActivity.class);
                intentForFilterActivity.putExtra("VideoToFilter", videoUri.getPath());
                startActivity(intentForFilterActivity);
            }
        }
    }

    Here’s the code for the audio extraction activity. It is called "FilterActivity", as its final aim is to filter out outdoor noise using additional functions. I’m using WritingMinds’ implementation of FFmpeg:
    https://github.com/WritingMinds/ffmpeg-android-java

    import android.app.Activity;
    import android.content.Intent;
    import android.os.Bundle;
    import android.widget.Toast;

    import com.github.hiteshsondhi88.libffmpeg.ExecuteBinaryResponseHandler;
    import com.github.hiteshsondhi88.libffmpeg.FFmpeg;
    import com.github.hiteshsondhi88.libffmpeg.exceptions.FFmpegCommandAlreadyRunningException;



    public class FilterActivity extends Activity {

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_filter);

            Intent intentVideo = getIntent();
            String pathIn = intentVideo.getStringExtra("VideoToFilter");

            FFmpeg ffmpeg = FFmpeg.getInstance(FilterActivity.this);
            try {
                String[] cmdExtract = {"-i " + pathIn + " extracted.wav"};
                ffmpeg.execute(cmdExtract, new ExecuteBinaryResponseHandler() {

                    @Override
                    public void onStart() {}

                    @Override
                    public void onProgress(String message) {}

                    @Override
                    public void onFailure(String message) {
                        Toast.makeText(FilterActivity.this, "Failure !", Toast.LENGTH_SHORT).show();
                    }

                    @Override
                    public void onSuccess(String message) {}

                    @Override
                    public void onFinish() {}
                });
            } catch (FFmpegCommandAlreadyRunningException e) {
            }
        }
    }

    and I always get the "Failure !" message.

    Some parts of the code may look extremely bad. As written previously, I’m a real Android Studio beginner.

    Do you have any correction that could work? Or even just a strategy?

    Thank you in advance!
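
    For reference, a common cause of onFailure with the WritingMinds wrapper is that execute() expects every argument as its own array element, so a single "-i path extracted.wav" string is not parsed as a command; the output also has to be an absolute path in a directory the app can write to, and the bundled binary has to be loaded once before anything can be executed. The following is only a sketch of FilterActivity along those lines, not tested code; the output location under getExternalFilesDir() and the Toast messages are illustrative choices.

    import java.io.File;

    import android.app.Activity;
    import android.os.Bundle;
    import android.widget.Toast;

    import com.github.hiteshsondhi88.libffmpeg.ExecuteBinaryResponseHandler;
    import com.github.hiteshsondhi88.libffmpeg.FFmpeg;
    import com.github.hiteshsondhi88.libffmpeg.LoadBinaryResponseHandler;
    import com.github.hiteshsondhi88.libffmpeg.exceptions.FFmpegCommandAlreadyRunningException;
    import com.github.hiteshsondhi88.libffmpeg.exceptions.FFmpegNotSupportedException;

    public class FilterActivity extends Activity {

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_filter);

            final String pathIn = getIntent().getStringExtra("VideoToFilter");
            // Write into the app's external files directory so no extra permission is required.
            final File outFile = new File(getExternalFilesDir(null), "extracted.wav");

            final FFmpeg ffmpeg = FFmpeg.getInstance(this);
            try {
                // The bundled ffmpeg binary must be loaded once before execute() can run.
                ffmpeg.loadBinary(new LoadBinaryResponseHandler() {
                    @Override
                    public void onSuccess() {
                        extractAudio(ffmpeg, pathIn, outFile);
                    }

                    @Override
                    public void onFailure() {
                        Toast.makeText(FilterActivity.this, "FFmpeg not supported on this device", Toast.LENGTH_LONG).show();
                    }
                });
            } catch (FFmpegNotSupportedException e) {
                Toast.makeText(this, e.getMessage(), Toast.LENGTH_LONG).show();
            }
        }

        private void extractAudio(FFmpeg ffmpeg, String pathIn, File outFile) {
            // Each argument is its own array element; a single "-i path out.wav" string is not parsed.
            String[] cmdExtract = {"-i", pathIn, outFile.getAbsolutePath()};
            try {
                ffmpeg.execute(cmdExtract, new ExecuteBinaryResponseHandler() {
                    @Override
                    public void onFailure(String message) {
                        // The message reports what ffmpeg itself complained about.
                        Toast.makeText(FilterActivity.this, "Failure: " + message, Toast.LENGTH_LONG).show();
                    }

                    @Override
                    public void onSuccess(String message) {
                        Toast.makeText(FilterActivity.this, "Audio extracted", Toast.LENGTH_SHORT).show();
                    }
                });
            } catch (FFmpegCommandAlreadyRunningException e) {
                Toast.makeText(this, "FFmpeg is already running", Toast.LENGTH_SHORT).show();
            }
        }
    }

    Note also that videoUri.getPath() only yields a readable file path for file:// Uris; if the camera app returns a content:// Uri, the path may need to be resolved or the video copied to a local file before FFmpeg can open it.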

  • What codec/format to use for fastest possible decoding?

    2 March 2017, by JT J

    I’m using an ffmpeg script (in Windows) that extracts all the keyframes from a video and pastes them into a folder. I’ve made sure that my drive speed, CPU, and RAM are not causing a bottleneck.

    The quality of the video is actually not important at all in this case. I need to encode the video that the script extracts frames from so that it has the fastest possible decoding speed. File size and quality are not important, only read speed. The video does not have audio. What would work best for me?

    If it matters, here’s the script I’m working with:

    ffmpeg -i input.mp4 -vf "select=eq(pict_type\,I)" -vsync 1 %%3d.bmp

    Sorry if I sound like I don’t know what I’m talking about, this is not a topic I am super familiar with. I appreciate your help!

  • Write audio packet to file using ffmpeg

    27 February 2017, by iamyz

    I am trying to write audio packets to a file using FFmpeg. The source device sends a packet at a fixed interval, e.g.

    First packet has a time stamp 00:00:00
    Second packet has a time stamp 00:00:00.5000000
    Third packet has a time stamp 00:00:01
    And so on...

    That means two packets per second.

    I want to encode those packets and write to a file.

    I am referring to the FFmpeg muxing.c example.

    Encoding and writing produce no errors, but the output file has only about 2 seconds of audio and it plays back far too fast.

    The video frames are written correctly according to the settings.

    I think the problem is related to the calculation of the packet’s pts, dts and duration.

    How should I calculate proper values for pts, dts and duration? Or is the problem related to something else?

    Code:

    void AudioWriter::WriteAudioChunk(IntPtr chunk, int lenght, TimeSpan timestamp)
    {
       int buffer_size = av_samples_get_buffer_size(NULL, outputStream->tmp_frame->channels, outputStream->tmp_frame->nb_samples,  outputStream->AudioStream->codec->sample_fmt, 0);

       uint8_t *audioData = reinterpret_cast<uint8_t*>(static_cast<void*>(chunk));
       int ret = avcodec_fill_audio_frame(outputStream->tmp_frame,outputStream->Channels, outputStream->AudioStream->codec->sample_fmt, audioData, buffer_size, 1);

       if (ret < 0)
          throw gcnew System::IO::IOException("Could not fill the audio frame.");

       write_audio_frame(outputStream->FormatContext, outputStream, audioData);
    }


    static int write_audio_frame(AVFormatContext *oc, AudioWriterData^ ost, uint8_t *audioData)
    {
          AVCodecContext *c;
          AVPacket pkt = { 0 };
          int ret;
          int got_packet;
          int dst_nb_samples;

          av_init_packet(&pkt);
          c = ost->AudioStream->codec;

          AVFrame *frame = ost->tmp_frame;

         if (frame)
         {
             dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples, c->sample_rate, c->sample_rate, AV_ROUND_UP);
             if (dst_nb_samples != frame->nb_samples)
               throw gcnew Exception("dst_nb_samples != frame->nb_samples");

             ret = av_frame_make_writable(ost->AudioFrame);
             if (ret < 0)
                throw gcnew Exception("Unable to make writable.");

             ret = swr_convert(ost->swr_ctx, ost->AudioFrame->data, dst_nb_samples, (const uint8_t **)frame->data, frame->nb_samples);
             if (ret < 0)
               throw gcnew Exception("Unable to convert to destination format.");

             frame = ost->AudioFrame;

             AVRational timebase = { 1, c->sample_rate };
             frame->pts = av_rescale_q(ost->samples_count, timebase, c->time_base);
             ost->samples_count += dst_nb_samples;
         }

         ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
         if (ret < 0)
           throw gcnew Exception("Error encoding audio frame.");

         if (got_packet)
         {
           ret = write_frame(oc, &c->time_base, ost->AudioStream, &pkt);
           if (ret < 0)
               throw gcnew Exception("Audio is not written.");
         }
         else
            throw gcnew Exception("Audio packet encode failed.");

         return (ost->AudioFrame || got_packet) ? 0 : 1;
    }

    static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
    {
       av_packet_rescale_ts(pkt, *time_base, st->time_base);
       pkt->stream_index = st->index;
       return av_interleaved_write_frame(fmt_ctx, pkt);
    }