
Media (91)
-
Corona Radiata
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Lights in the Sky
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Head Down
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Echoplex
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Discipline
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Letting You
26 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (80)
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
Media and theme uploads via FTP
31 May 2013
MediaSPIP can also process media transferred via FTP. If you prefer to upload this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
From the start you will find the following folders in your FTP space: config/: the site's configuration folder; IMG/: media already processed and online on the site; local/: the website's cache directory; themes/: custom themes and stylesheets; tmp/: working folder (...)
On other sites (6674)
-
Join videos side by side
28 November 2020, by Sudipta B
I need to join two webm videos side by side. I've been using the following ffmpeg command to join them:


ffmpeg -i /client.webm -i /client2.webm -filter_complex "[0:v][1:v]hstack=inputs=2[v]; [0:a][1:a]amerge[a]" -map "[v]" -map "[a]" -ac 2 /combined.webm



However, I have two issues:

- it doesn't seem to work properly if the videos have different dimensions or durations;
- moreover, it seems extremely slow and resource-intensive.

I'm eager for any solutions, and I'd love to know if there are any better command-line alternatives.
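A possible fix for the dimension mismatch, offered as a sketch rather than a guaranteed solution: scale both inputs to a common height before stacking (hstack requires equal heights), and pass shortest=1 so the stacked video ends with the shorter input. The 720-pixel target height is an arbitrary assumption:


ffmpeg -i /client.webm -i /client2.webm -filter_complex "[0:v]scale=-2:720[l];[1:v]scale=-2:720[r];[l][r]hstack=inputs=2:shortest=1[v];[0:a][1:a]amerge=inputs=2[a]" -map "[v]" -map "[a]" -ac 2 /combined.webm


As for speed, most of the cost is the VP8/VP9 re-encode itself, so the usual trade-off is a faster encoder setting (e.g. -deadline realtime for libvpx) rather than a different filter graph.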


-
Audio Video Mixing - Sync issue in Android with FFMPEG, Media Codec in different devices
24 November 2020, by khushbu
I have already tried everything for audio-video mixing and it's not working perfectly: while mixing audio into the recorded video, sometimes the audio is ahead of the video and vice versa.


Using FFMPEG:


This command adds an audio file to the video file and generates a final video in which the original audio is replaced:


val cmd ="-i $inputVideoPath -i ${inputAudio.absolutePath} -map 0:v -map 1:a -c:v copy -shortest ${outputVideo.absolutePath}"



After generating the final video, I found some delay that varied with device performance, so I added an offset in the two cases below.


1) Add a delay to the audio if the audio is ahead of the video:


val cmd = "-i ${tmpVideo.absolutePath} -itsoffset $hms -i ${tmpVideo.absolutePath} -map 0:v -map 1:a -c copy -preset veryfast ${createdVideo1?.absolutePath}"



2) Add a delay to the video if the video is ahead of the audio:


val cmd = "-i ${tmpVideo.absolutePath} -itsoffset $hms -i ${tmpVideo.absolutePath} -map 1:v -map 0:a -c copy -preset veryfast ${createdVideo1?.absolutePath}"



NOTE: Here $hms is the delay in 00:00:00.000 format.
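(For reference, -itsoffset applies to the input that immediately follows it, so with a literal 150 ms delay and a placeholder file name, case 1 would expand to something like:


ffmpeg -i tmp.mp4 -itsoffset 00:00:00.150 -i tmp.mp4 -map 0:v -map 1:a -c copy out.mp4


Here the video comes from the first, unshifted input and the audio from the second, delayed one.)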


But still, it doesn't work on all devices (e.g. Redmi, OnePlus).


Using Media Codec:


This solution performs better, but it still doesn't work on all devices.


This process only supports the .aac format, so if the user selected an .mp3 file I first have to convert it to .aac using the function below:


fun Convert_Mp3_to_acc() {
    AndroidAudioConverter.load(requireActivity(), object : ILoadCallback {
        override fun onSuccess() {
            val callback: IConvertCallback = object : IConvertCallback {
                override fun onSuccess(convertedFile: File) {
                    toggleLoader(false)
                    audioLink = convertedFile.absolutePath
                    append()
                }

                override fun onFailure(error: java.lang.Exception) {
                    toggleLoader(false)
                    Toast.makeText(requireActivity(), "" + error, Toast.LENGTH_SHORT).show()
                }
            }
            AndroidAudioConverter.with(requireActivity())
                .setFile(File(audioLink))
                .setFormat(AudioFormat.AAC)
                .setCallback(callback)
                .convert()
        }

        override fun onFailure(error: java.lang.Exception) {
            toggleLoader(false)
        }
    })
}
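(As an aside, the same MP3-to-AAC step can also be done with a single ffmpeg invocation, which may be simpler to test in isolation; the file names here are placeholders:


ffmpeg -i input.mp3 -c:a aac -b:a 192k output.aac


ffmpeg picks the ADTS muxer from the .aac extension, which is the raw-AAC form the mp4parser track code below expects.)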



After a successful conversion from .mp3 to .aac, the audio and video tracks are extracted for merging:


private fun append(): Boolean {
    val progressDialog = ProgressDialog(requireContext())
    Thread {
        requireActivity().runOnUiThread {
            progressDialog.setMessage("Please wait..")
            progressDialog.show()
        }
        // keep only the paths that actually contain a video track
        val video_list = ArrayList<String>()
        for (i in videopaths.indices) {
            val file = File(videopaths[i])
            if (file.exists()) {
                val retriever = MediaMetadataRetriever()
                retriever.setDataSource(requireActivity(), Uri.fromFile(file))
                val hasVideo =
                    retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_HAS_VIDEO)
                val isVideo = "yes" == hasVideo
                if (isVideo /*&& file.length() > 1000*/) {
                    Log.d("resp", videopaths[i])
                    video_list.add(videopaths[i])
                }
            }
        }
        try {
            val inMovies = arrayOfNulls<Movie>(video_list.size)
            for (i in video_list.indices) {
                inMovies[i] = MovieCreator.build(video_list[i])
            }
            // collect the audio ("soun") and video ("vide") tracks of every movie
            val videoTracks: MutableList<Track> = LinkedList()
            val audioTracks: MutableList<Track> = LinkedList()
            for (m in inMovies) {
                for (t in m!!.tracks) {
                    if (t.handler == "soun") {
                        audioTracks.add(t)
                    }
                    if (t.handler == "vide") {
                        videoTracks.add(t)
                    }
                }
            }
            // append the collected tracks into a single output movie
            val result = Movie()
            if (audioTracks.size > 0) {
                result.addTrack(AppendTrack(*audioTracks.toTypedArray()))
            }
            if (videoTracks.size > 0) {
                result.addTrack(AppendTrack(*videoTracks.toTypedArray()))
            }
            val out = DefaultMp4Builder().build(result)
            val outputFilePath: String = Variables.outputfile

            /*if (audio != null) {
                Variables.outputfile
            } else {
                Variables.outputfile2
            }*/

            val fos = FileOutputStream(File(outputFilePath))
            out.writeContainer(fos.channel)
            fos.close()

            requireActivity().runOnUiThread {
                progressDialog.dismiss()
                Merge_withAudio()

                /* if (audio != null) else {
                    //Go_To_preview_Activity()
                }*/
            }
        } catch (e: java.lang.Exception) {
            // note: failures in the merge are silently swallowed here
        }
    }.start()

    return true
}


This merges the selected audio with the recorded video:


fun Merge_withAudio() {
    val root = Environment.getExternalStorageDirectory().toString()

    // Uri mediaPath = Uri.parse("android.resource://" + getPackageName() + "/" + R.raw.file_copy);
    //String audio_file = Variables.app_folder + Variables.SelectedAudio_AAC;

    //String filename = "android.resource://" + getPackageName() + "/raw/file_copy.aac";
    val audio_file: String = audioLink!!
    Log.e("Merge ", audio_file)
    val video = "$root/output.mp4"

    val bundle = Bundle()
    bundle.putString("FinalVideo", createdVideo?.absolutePath)

    val merge_video_audio = Merge_Video_Audio(this, bundle, object : AsyncResponse {
        override fun processFinish(output: Bundle?) {
            requireActivity().runOnUiThread {
                finalVideo = bundle.getString("FinalVideo")
                createdVideo = File(finalVideo)

                Log.e("Final Path ", finalVideo)

                createThumb {
                    setUpExoPlayer()
                }
            }
        }
    })
    merge_video_audio.doInBackground(audio_file, video, createdVideo?.absolutePath)
}


public class Merge_Video_Audio extends AsyncTask<String, String, String> {

    ProgressDialog progressDialog;
    RecentCompletedVideoFragment context;
    public AsyncResponse delegate = null;

    Bundle bundleValue;

    String audio, video, output;

    public Merge_Video_Audio(RecentCompletedVideoFragment context, Bundle bundle, AsyncResponse delegate) {
        this.context = context;
        this.bundleValue = bundle;
        this.delegate = delegate;
        progressDialog = new ProgressDialog(context.requireContext());
        progressDialog.setMessage("Please Wait...");
    }

    @Override
    protected void onPreExecute() {
        super.onPreExecute();
    }

    @Override
    public String doInBackground(String... strings) {
        try {
            progressDialog.show();
        } catch (Exception e) {
            // ignore: the fragment may no longer be attached
        }
        audio = strings[0];
        video = strings[1];
        output = strings[2];

        Log.d("resp", audio + "----" + video + "-----" + output);

        Thread thread = new Thread(runnable);
        thread.start();

        return null;
    }

    @Override
    protected void onPostExecute(String s) {
        super.onPostExecute(s);
        Log.e("On Post Execute ", "True");
    }


    public void Go_To_preview_Activity() {
        delegate.processFinish(bundleValue);
    }

    public Track CropAudio(String videopath, Track fullAudio) {
        try {
            IsoFile isoFile = new IsoFile(videopath);

            // video duration in seconds = duration / timescale
            double lengthInSeconds = (double)
                    isoFile.getMovieBox().getMovieHeaderBox().getDuration() /
                    isoFile.getMovieBox().getMovieHeaderBox().getTimescale();

            Track audioTrack = (Track) fullAudio;

            double startTime1 = 0;
            double endTime1 = lengthInSeconds;

            long currentSample = 0;
            double currentTime = 0;
            double lastTime = -1;
            long startSample1 = -1;
            long endSample1 = -1;

            // walk the audio samples and find the sample numbers that bracket
            // the [startTime1, endTime1] window
            for (int i = 0; i < audioTrack.getSampleDurations().length; i++) {
                long delta = audioTrack.getSampleDurations()[i];

                if (currentTime > lastTime && currentTime <= startTime1) {
                    // current sample is still before the new start time
                    startSample1 = currentSample;
                }
                if (currentTime > lastTime && currentTime <= endTime1) {
                    // current sample is after the new start time and still before the new end time
                    endSample1 = currentSample;
                }

                lastTime = currentTime;
                currentTime += (double) delta / (double) audioTrack.getTrackMetaData().getTimescale();
                currentSample++;
            }

            return new CroppedTrack(fullAudio, startSample1, endSample1);

        } catch (IOException e) {
            e.printStackTrace();
        }

        return fullAudio;
    }



    public Runnable runnable = new Runnable() {
        @Override
        public void run() {
            try {
                Movie m = MovieCreator.build(video);

                // keep every track except the original audio ("soun") track
                List<Track> nuTracks = new ArrayList<>();

                for (Track t : m.getTracks()) {
                    if (!"soun".equals(t.getHandler())) {
                        Log.e("Track ", t.getName());
                        nuTracks.add(t);
                    }
                }

                Log.e("Path ", audio);

                try {
                    // Track nuAudio = new AACTrackImpl();
                    Track nuAudio = new AACTrackImpl(new FileDataSourceImpl(audio));

                    // crop the new audio to the video's duration, then add it
                    Track crop_track = CropAudio(video, nuAudio);

                    nuTracks.add(crop_track);

                    m.setTracks(nuTracks);

                    Container mp4file = new DefaultMp4Builder().build(m);

                    FileChannel fc = new FileOutputStream(new File(output)).getChannel();
                    mp4file.writeContainer(fc);
                    fc.close();

                } catch (FileNotFoundException fnfe) {
                    fnfe.printStackTrace();
                } catch (IOException ioe) {
                    ioe.printStackTrace();
                }

                try {
                    progressDialog.dismiss();
                } catch (Exception e) {
                    Log.d("resp", e.toString());
                } finally {
                    Go_To_preview_Activity();
                }

            } catch (IOException e) {
                e.printStackTrace();
                Log.d("resp", e.toString());
            }
        }
    };
}



This solution also doesn't work on all devices.


Can anyone suggest where I am going wrong, or any solution for it?


-
Corrupt AVFrame returned by libavcodec
2 January 2015, by informer2000
As part of a bigger project, I'm trying to decode a number of HD (1920x1080) video streams simultaneously. Each video stream is stored in raw yuv420p format within an AVI container. I have a Decoder class from which I create a number of objects within different threads (one object per thread). The two main methods in Decoder are decode() and getNextFrame(), whose implementations I provide below.

When I separate out the decoding logic and use it to decode a single stream, everything works fine. However, when I use the multi-threaded code, I get a segmentation fault and the program crashes inside the processing code in the decoding loop. After some investigation, I realized that the data array of the AVFrame filled in getNextFrame() contains addresses which are out of range (according to gdb).

I'm really lost here! I'm not doing anything that would change the contents of the AVFrame in my code. The only place where I access the AVFrame is when I call sws_scale() to convert the color format, and that's where the segmentation fault occurs in the multi-threaded case because of the corrupt AVFrame. Any suggestion as to why this is happening is greatly appreciated. Thanks in advance.

The decode() method:

void decode() {
    QString filename("video.avi");
    AVFormatContext* container = 0;
    if (avformat_open_input(&container, filename.toStdString().c_str(), NULL, NULL) < 0) {
        fprintf(stderr, "Could not open %s\n", filename.toStdString().c_str());
        exit(1);
    }
    if (avformat_find_stream_info(container, NULL) < 0) {
        fprintf(stderr, "Could not find file info..\n");
    }

    // find a video stream
    int stream_id = -1;
    for (unsigned int i = 0; i < container->nb_streams; i++) {
        if (container->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            stream_id = i;
            break;
        }
    }
    if (stream_id == -1) {
        fprintf(stderr, "Could not find a video stream..\n");
    }
    av_dump_format(container, stream_id, filename.toStdString().c_str(), false);

    // find the appropriate codec and open it
    AVCodecContext* codec_context = container->streams[stream_id]->codec; // pointer to the codec context for the video stream
    AVCodec* codec = avcodec_find_decoder(codec_context->codec_id); // find the decoder for the video stream
    if (codec == NULL) {
        fprintf(stderr, "Could not find a suitable codec..\n");
        return; // codec not found
    }

    // Inform the codec that we can handle truncated bitstreams -- i.e.,
    // bitstreams where frame boundaries can fall in the middle of packets
    if (codec->capabilities & CODEC_CAP_TRUNCATED)
        codec_context->flags |= CODEC_FLAG_TRUNCATED;
    fprintf(stderr, "Codec: %s\n", codec->name);

    // open the codec
    int ret = avcodec_open2(codec_context, codec, NULL);
    if (ret < 0) {
        fprintf(stderr, "Could not open the needed codec.. Error: %d\n", ret);
        return;
    }

    // allocate video frame
    AVFrame *frame = avcodec_alloc_frame(); // deprecated, should use av_frame_alloc() instead
    if (!frame) {
        fprintf(stderr, "Could not allocate video frame..\n");
        return;
    }

    int frameNumber = 0;
    // as long as there are remaining frames in the stream
    while (getNextFrame(container, codec_context, stream_id, frame)) {
        // Processing logic here...
        // AVFrame data array contains three addresses which are out of range
    }

    // freeing resources
    av_free(frame);
    avcodec_close(codec_context);
    avformat_close_input(&container);
}

The getNextFrame() method:

bool getNextFrame(AVFormatContext *pFormatCtx,
                  AVCodecContext *pCodecCtx,
                  int videoStream,
                  AVFrame *pFrame) {
    uint8_t inbuf[INBUF_SIZE + FF_INPUT_BUFFER_PADDING_SIZE];
    char buf[1024];
    int len;
    int got_picture;
    AVPacket avpkt;
    av_init_packet(&avpkt);
    memset(inbuf + INBUF_SIZE, 0, FF_INPUT_BUFFER_PADDING_SIZE);

    // read data from the bit stream and store it in the AVPacket object
    while (av_read_frame(pFormatCtx, &avpkt) >= 0) {
        // check the stream index of the read packet to make sure it is a video stream
        if (avpkt.stream_index == videoStream) {
            // decode the packet, store the decoded content in the AVFrame object
            // and set the flag if we have a complete decoded picture
            avcodec_decode_video2(pCodecCtx, pFrame, &got_picture, &avpkt);
            // if we have completed decoding an entire picture (frame), return true
            if (got_picture) {
                av_free_packet(&avpkt);
                return true;
            }
        }
        // free the AVPacket object that was allocated by av_read_frame
        av_free_packet(&avpkt);
    }
    return false;
}

The lock management callback function:

static int lock_call_back(void **mutex, enum AVLockOp op) {
    switch (op) {
        case AV_LOCK_CREATE:
            *mutex = (pthread_mutex_t *) malloc(sizeof(pthread_mutex_t));
            pthread_mutex_init((pthread_mutex_t *) (*mutex), NULL);
            break;
        case AV_LOCK_OBTAIN:
            pthread_mutex_lock((pthread_mutex_t *) (*mutex));
            break;
        case AV_LOCK_RELEASE:
            pthread_mutex_unlock((pthread_mutex_t *) (*mutex));
            break;
        case AV_LOCK_DESTROY:
            pthread_mutex_destroy((pthread_mutex_t *) (*mutex));
            free(*mutex);
            break;
    }
    return 0;
}
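A side note on the threading angle of this question: a callback like the one above only takes effect once it has been registered with the library, and that must happen before any codec is opened. A minimal sketch of the registration step, assuming an FFmpeg version old enough (pre-4.0) to still provide av_lockmgr_register():

#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

// Run once from the main thread, before any Decoder thread calls avcodec_open2().
void init_ffmpeg_locking(void) {
    av_register_all(); // register demuxers/decoders (required in FFmpeg of this era)
    if (av_lockmgr_register(lock_call_back) != 0) {
        fprintf(stderr, "Could not register the lock manager..\n");
    }
}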