
Other articles (84)

  • Writing a news item

    21 June 2013, by

    Present changes to your MédiaSPIP, or news about your projects on your MédiaSPIP, using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News-item creation form For a document of the news type, the fields offered by default are: Publication date (customize the publication date) (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your Médiaspip installation is at version 0.2 or higher. If necessary, contact your MédiaSpip administrator to find out.

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

On other sites (9800)

  • How to concatenate a bunch of videos using Python?

    5 July 2023, by Saikat Chakraborty

    So, I have over 5,000 small clips that I need to combine. Because I need to apply various custom filters to their file names, I want to do it with Python. I have the following code:

    


import os
from moviepy.editor import VideoFileClip, concatenate_videoclips

os.chdir('D:/videos')
list1, list2 = os.listdir(), []

# Keep only the clips whose names do not end in '-l.mp4' or 'ALT.mp4'
for i in list1:
    if i[-6:] != '-l.mp4' and i[-7:] != 'ALT.mp4':
        list2.append(i)
print('Getting Video Info:')

final = VideoFileClip(list2[0])

# Concatenate the remaining clips one by one into a single running clip
for i in range(1, len(list2)):
    final = concatenate_videoclips([final, VideoFileClip(list2[i])])
    print('\r' + str(i + 1) + '/' + str(len(list2)), end='')

os.chdir('D:')
final.write_videofile('Merged.mp4')


    


    But the program creates lots of processes, and it crashes just after reading about 150 clips. The process count keeps increasing!
    Is there an easier way or an alternative to do this? Thanks!

    


    Edit:
    I've tried using ffmpeg too, but concatenation drops the audio, since the concat protocol doesn't support the .mp4 container. Even if I convert all the files to .ts and try to concatenate those, WindowsError: [Error 206] The filename or extension is too long pops up, because too many file names end up joined by |. I made the following changes after converting all the files to .ts format:

    


import os
import ffmpeg

os.chdir('D:/videos')
list1 = os.listdir()
list2 = [i for i in list1 if i[-3:] == '.ts']

# Build one long chain of concat nodes, one per input file
list2[0] = ffmpeg.input(list2[0])
for i in range(1, len(list2)):
    list2[i] = ffmpeg.concat(list2[i - 1], ffmpeg.input(list2[i]))
    print('\r' + str(i) + '/' + str(len(list2)), end='')

# Attach the output file to the last node of the chain and run it
out = ffmpeg.output(list2[-1], 'D:/Merged.mp4')
ffmpeg.run(out)


    


    But now I'm getting RecursionError: maximum recursion depth exceeded while calling a Python object.
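
    A minimal sketch (not from the question) of the workaround usually suggested for this situation: instead of chaining thousands of concat nodes, write the clip names to a list file and hand it to ffmpeg's concat demuxer with stream copy, which sidesteps both the per-clip decoding and the "filename too long" limit of the | syntax. The file names and paths below are illustrative assumptions.

import os
import subprocess

os.chdir('D:/videos')

# One "file '<name>'" line per clip, in the order they should be joined
clips = sorted(f for f in os.listdir() if f.endswith('.ts'))
with open('list.txt', 'w', encoding='utf-8') as fh:
    for name in clips:
        fh.write("file '" + name + "'\n")

# Stream-copy everything into one output; no re-encoding, no giant | argument
subprocess.run(
    ['ffmpeg', '-f', 'concat', '-safe', '0', '-i', 'list.txt',
     '-c', 'copy', 'D:/Merged.mp4'],
    check=True,
)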

    


  • How to fade background music in and out when speech plays, fade out the audio at the end, and mix them together using ffmpeg

    3 April 2019, by Harshil Dholakiya

    I am using the command below in my project to mix background music with my speech audio, so that the background music's volume is lowered while the speech plays:

    ffmpeg -y -i speech.mp3 -stream_loop -1 -i music.mp3 -filter_complex "[0:a]asplit=2[sc][mix];[1:a][sc]sidechaincompress=threshold=0.01:ratio=5[bg];[mix][bg]amix=inputs=2:duration=first:dropout_transition=0[final]" -map [final] finalAudio.mp3

    I want to add these two filters to the audio:

    1) A fade-in on the background music over 1.5 seconds when speech arrives, and a fade-out over 1.5 seconds when the speech finishes. speech.mp3 contains more than one speech segment, with silence in between.

    2) A fade-out over 1.5 seconds at the end of the audio.

    Can anyone help me achieve both of the above filters using ffmpeg with my command above?
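
    Not an answer from the thread, just a minimal sketch of how these two effects are commonly approached, with hypothetical numbers: sidechaincompress already ducks the music around speech, and its attack and release parameters (in milliseconds) control how gradually it fades down and back up, so setting both to 1500 approximates the 1.5-second fades; a trailing afade filter handles the fade-out at the end, assuming the total duration (60 s here, purely illustrative) is known. Python's subprocess is used only to keep the example in the same language as the other snippets.

import subprocess

# Hypothetical duration: a 60 s output, so the final fade-out starts at 58.5 s
fade_out_start = 58.5

filter_graph = (
    "[0:a]asplit=2[sc][mix];"
    # attack/release (ms) set how fast the music ducks and recovers around speech
    "[1:a][sc]sidechaincompress=threshold=0.01:ratio=5:attack=1500:release=1500[bg];"
    "[mix][bg]amix=inputs=2:duration=first:dropout_transition=0[mixed];"
    # 1.5 s fade-out at the very end of the mixed track
    "[mixed]afade=t=out:st=" + str(fade_out_start) + ":d=1.5[final]"
)

subprocess.run(
    ["ffmpeg", "-y", "-i", "speech.mp3", "-stream_loop", "-1", "-i", "music.mp3",
     "-filter_complex", filter_graph, "-map", "[final]", "finalAudio.mp3"],
    check=True,
)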

  • Xuggler Encoding video of Desktop With Audio - audio has gaps

    2 November 2012, by Chris

    I am using Xuggler to convert images captured with the Java Robot class, plus sound read from a TargetDataLine, and to encode them into a video. I am then attempting to stream this video data over HTTP (after writing my header) to a Flash client via a Socket OutputStream, but it stutters as it plays (it never plays smoothly), no matter what buffer value I use on the client side.

    I am asking for help and showing my Java code because I suspect the problem is either in how I am encoding the video, or in something about sending data over the HTTP socket that I am not getting.

    // Snippet from the question: line (the TargetDataLine), audioFormat, out (the HTTP
    // socket's OutputStream), keepGoing and the stream index/id variables are defined
    // elsewhere in the class.
    ByteArrayURLHandler ba = new ByteArrayURLHandler();
    final IRational FRAME_RATE = IRational.make(30);
    final int SECONDS_TO_RUN_FOR = 20;
    final Robot robot = new Robot();
    final Toolkit toolkit = Toolkit.getDefaultToolkit();
    final Rectangle screenBounds = new Rectangle(toolkit.getScreenSize());
    IMediaWriter writer;

    // Write the FLV output straight to the client's OutputStream
    writer = ToolFactory.makeWriter(
       XugglerIO.map(
           XugglerIO.generateUniqueName(out, ".flv"),
           out
       ));

    writer.addListener(new MediaListenerAdapter() {
       public void onAddStream(IAddStreamEvent event) {
           event.getSource().getContainer().setInputBufferLength(1000);
           IStreamCoder coder = event.getSource().getContainer().getStream(event.getStreamIndex()).getStreamCoder();
           if (coder.getCodecType() == ICodec.Type.CODEC_TYPE_AUDIO) {
               coder.setFlag(IStreamCoder.Flags.FLAG_QSCALE, false);  
               coder.setBitRate(32000);
               System.out.println("onaddstream"+ coder.getPropertyNames().toString());
           }
           if (coder.getCodecType() == ICodec.Type.CODEC_TYPE_VIDEO) {
               // coder.setBitRate(64000);
               // coder.setBitRateTolerance(64000);
           }
       }
    });

    writer.addVideoStream(videoStreamIndex, videoStreamId, 1024, 768);
    final int channelCount = 1;

    int audionumber = writer.addAudioStream(audioStreamIndex, audioStreamId, 1, 44100);
    // Roughly one second of audio per buffer: sample rate * frame size
    int bufferSize = (int) audioFormat.getSampleRate() * audioFormat.getFrameSize();
    byte[] audioBuf;

    int i = 0;

    final int audioStreamIndex = 1;
    final int audioStreamId = 1;
    BufferedImage screen, bgrScreen;
    long startTime = System.nanoTime();

    // Capture loop: grab a screenshot and whatever audio is buffered, then encode both
    while (keepGoing)
    {

       audioBuf = new byte[bufferSize];
       i++;

       screen = robot.createScreenCapture(screenBounds);

       bgrScreen = convertToType(screen, BufferedImage.TYPE_3BYTE_BGR);
       long nanoTs = System.nanoTime()-startTime;
       writer.encodeVideo(0, bgrScreen, nanoTs, TimeUnit.NANOSECONDS);
       // Read only the bytes currently buffered on the TargetDataLine
       audioBuf = new byte[line.available()];
       int nBytesRead = line.read(audioBuf, 0, audioBuf.length);

       IBuffer iBuf = IBuffer.make(null, audioBuf, 0, nBytesRead);

       IAudioSamples smp = IAudioSamples.make(iBuf,1,IAudioSamples.Format.FMT_S16);
       if (smp == null) {
           return;
       }

       long numSample = audioBuf.length / smp.getSampleSize();

       // Timestamp the samples with the same clock as the video, converted to microseconds
       smp.setComplete(true, numSample, (int) audioFormat.getSampleRate(),
           audioFormat.getChannels(), IAudioSamples.Format.FMT_S16, nanoTs / 1000);

       writer.encodeAudio(1, smp);

       writer.flush();
    }