
Other articles (108)

  • MediaSPIP version 0.1 Beta

16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Improving the base version

13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the following two images to compare.
    To do so, simply activate the Chosen plugin (general site configuration > plugin management), then configure it (Templates > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to improve, for example select[multiple] for multiple-selection lists (...)

On other sites (12666)

  • Mac terminal command to list files and sort by date to use in ffmpeg

22 September 2020, by Jeff

    I am using a GoPro to film a bunch of videos. I want to take those videos directly from the SD card folder and concatenate them into a single video (bypassing an editor) using FFmpeg.

    I'm currently able to stitch together "chaptered" videos with the following example command on my Mac (10.13):

    ffmpeg -f concat -safe 0 -i <(for f in /sdcardfolder/100GOPRO/GH*488.MP4; do echo "file '$f'"; done) -c copy /folder/video.mp4

    The reason for this is that the ffmpeg command requires a text file that looks like this:

    file '/folder/GH016992.MP4'
    file '/folder/GH036990.MP4'
    ...

    The real command is this, which generates the list of files in the right format, with "file" in front of each one, and can be embedded into the ffmpeg command:

    for f in /Volumes/GoPro8/DCIM/100GOPRO/GH0*71*.MP4; do echo "file '$f'"; done

    I want to make two changes to this:

    1. List the files in date order (ascending): I want the list of files to be in date order, but I can't figure out how to add a -sort or something to the "for f in" command (see the sketch after this list).

    2. Allow a more robust set of file matching/filtering: Right now I can use basic patterns like GH*488.MP4 or, with chapters (which increment the first number), something like GH0[123]488.MP4 to get just the first few. But when I make it more flexible, like GH0[0-9]71[0-9][0-9].MP4 - which would be necessary to match all files recorded yesterday, but nothing before then - the command doesn't like this pattern. It seems to only accept a *.
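
    Something along these lines might work for the date ordering (an untested sketch: with multiple file arguments, ls -tr lists them sorted by modification time, oldest first, and the loop prefixes each path with "file" as before):

    ls -tr /Volumes/GoPro8/DCIM/100GOPRO/GH0*71*.MP4 | while read -r f; do echo "file '$f'"; done
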
    I looked at a few examples like https://opensource.com/article/19/6/how-write-loop-bash but there wasn't much more than just listing files.

    This boils down to a terminal command and isn't really related to FFmpeg, but I hope it's helpful context.

    I imagined it would be something like this, but this definitely doesn't work:

    for f in (find /Volumes/GoPro8/DCIM/100GOPRO/GH0[0-9]71[0-9][0-9].MP4 -type f | sort); do echo "file '$f'"; done
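
    A variant of that idea might work (an untested sketch for macOS: BSD find's -E enables extended regex for -regex, stat -f '%m %N' prepends each file's modification time, and sort -n orders them oldest first):

    find -E /Volumes/GoPro8/DCIM/100GOPRO -type f -regex '.*GH0[0-9]71[0-9][0-9]\.MP4' -print0 \
      | xargs -0 stat -f '%m %N' | sort -n | cut -d' ' -f2- \
      | while read -r f; do echo "file '$f'"; done
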
    I'd appreciate any help! Thanks!

    Update

    It looks like sorting isn't easy with stock Mac tools, so I gave up and wrote a much simpler Ruby script that could execute everything for me. This is not really an answer to my question above, but it is a solution.

    Here I can easily write the text file necessary for ffmpeg, and I can also filter the files by a regex on the name, by a particular date, and by size. Then, via the script, I simply execute the ffmpeg command with args to concat the files. I can also have it immediately resample the result to compress it (GoPro videos are giant, and I'm OK with a much lower bitrate if I want to save raw footage).

    I got lucky with Dir.entries in Ruby: it happened to return the files in date order, but that ordering is not guaranteed, so the script below also sorts by mtime explicitly.

    PATH = '/Volumes/GoPro8/DCIM/100GOPRO/'
    NEW_FILENAME = '/folder/new-file.mp4'
    video_list = '/folder/ffmpeg-list.txt'

    # start a fresh list file for ffmpeg's concat demuxer
    File.delete(video_list) if File.exist?(video_list)

    # Dir.entries makes no ordering guarantee, so filter and sort by mtime explicitly
    entries = Dir.entries(PATH).select { |f| f.match(/GH0.*\.MP4/) }
    entries.sort_by! { |f| File.mtime(PATH + f) }

    i = 1
    entries.each do |f|
        d = File.mtime(PATH + f)
        size = File.size(PATH + f)
        # keep only files recorded on the given date and smaller than ~1 GB
        if d.to_s.match(/2020-07-30/) && size < 1000000000
            puts "#{i}\t#{f}\t#{d}\t#{size}"
            # each line must follow the concat format: file '<path>'
            File.write(video_list, "file '#{PATH + f}'\n", mode: "a")
            i += 1
        end
    end

    command = "ffmpeg -f concat -safe 0 -i #{video_list} -c copy #{NEW_FILENAME}"

    puts "executing concatenate..."
    puts command
    system(command)

  • Small speedup with multiple processes for opencv

27 June 2017, by nji9

    In an application I have to analyse movie files
    (let's say compute the differences of consecutive frame pairs).
    For this I use opencv (which uses ffmpeg as lib/codecs).
    Depending on the video format there are different CPU loads.
    For wmv3 there seems to be no more than 1 core used.
    So it was an obvious step to let multiple threads work on different parts of the movie,
    as the data is independent (besides having to stitch the parts together afterwards).
    The code (stripped of the lap parameter) is quite simple:

    // headers and forward declarations needed for a stand-alone build
    #include <array>
    #include <iostream>
    #include <string>
    #include <thread>
    #include <vector>

    #include <opencv2/opencv.hpp>

    using namespace cv;
    using namespace std;

    int ComputeDifferences (const string& source, double *pDiffArray,
                            const unsigned short& nThreads);
    void ComputePart (const string source, double *pDiffArray,
                      const unsigned int& nThreads, const unsigned int& nThreadNo);

    int main(int argc, char *argv[])
    {
       const string source = "move.wmv";

       VideoCapture capt(source);
       if (!capt.isOpened())
       {
           cout  << "Could not open file " << source << endl;
           return -1;
       }

       unsigned short nThreads (8);

       double *pDiffArray = new double [(size_t) capt.get(CAP_PROP_FRAME_COUNT)];

       capt.release();

       ComputeDifferences (source, pDiffArray, nThreads);

       return 0;
    }

    int ComputeDifferences (const string& source, double *pDiffArray, const unsigned short& nThreads)
    {
       std::vector<std::thread*> threadVector;

       for (unsigned int i=0; i< nThreads; i++)
           threadVector.push_back (new std::thread (ComputePart, source, pDiffArray, nThreads, i));

       for (unsigned int i=0; i< nThreads; i++)
       {
           threadVector.at (i)->join();
           delete threadVector.at (i);   // threads were created with new
       }

       // Stitching

       return 0;
    }


    void ComputePart (const string source, double *pDiffArray,
                     const unsigned int& nThreads, const unsigned int& nThreadNo)
    {
       VideoCapture capt(source);
       if (!capt.isOpened())
       {
           cout  << "Could not open file " << source << endl;
           return;   // bail out; this thread has nothing to do
       }

       size_t startPosDiffArray = nThreadNo * (size_t) (capt.get(CAP_PROP_FRAME_COUNT) / nThreads);

       size_t sizePart ((size_t) (capt.get(CAP_PROP_FRAME_COUNT) / nThreads));

       size_t startPosFrame = (size_t) (capt.get(CAP_PROP_FRAME_COUNT) / nThreads) * nThreadNo;

       capt.set(CAP_PROP_POS_FRAMES, startPosFrame);

       Size refS = Size((int) capt.get(CAP_PROP_FRAME_WIDTH),
                        (int) capt.get(CAP_PROP_FRAME_HEIGHT));

       Mat frame, frameRes;
       std::array<Mat, 2> frameDuo;   // two-slot ping-pong buffer for consecutive frames
       Scalar s;

       capt >> frameDuo [0];
       if (!frameDuo [0].data)
           return;

       for (size_t i = 1; i < sizePart; i++) {
           capt >> frameDuo [i%2];

           if (!frameDuo [i%2].data)
               break;

           absdiff (frameDuo [(i-1)%2], frameDuo [i%2], frameRes);

           s = sum (frameRes);

           pDiffArray [i-1+startPosDiffArray] = (s [0] + s [1] + s [2])/ (refS.height * refS.width);
       }

       capt.release();
    }

    If I use it on a wmv3 video, 1280x720, about 50,000 frames,
    I get these speedups (on an Intel i7), relative to a single thread (190 sec):

    • MT2: 1.8
    • MT4: 2.6
    • MT8: 3.0

    Besides being very disappointed, I do not understand what is happening here.
    I do know Amdahl's law etc., but in this case I expected a far better speedup.
    Does anyone have a hint for me (being a newbie at this)?
    It's not the positioning (capt.set ()), as disabling that doesn't change anything.
    Is it the ffmpeg lib, opencv, thread switching in the std lib, or a working-set problem?

    [Edit:

    Following a hint in the comments I found that 80% of the time is spent in

    capt >> frameDuo [i%2];

    This consists of reading from file, decoding, and copying into the opencv structure.
    Of these, only the reading from file is of a "sequential type" (in Amdahl's sense).
    As the HDD doesn't show heavy access (even with MT8), and there is no difference
    when using a quick SSD, I don't understand why this sequential part should have such a big effect.
    How is it possible that 8 cores are fully working but the speedup is only 3?
    And: how can I do better?]
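
    One way to check how much of that time is pure decoding (a sketch; it assumes the ffmpeg CLI is available and reuses the move.wmv from the code above) is to decode to a null sink with and without decoder threading and compare the -benchmark timings:

    ffmpeg -benchmark -threads 1 -i move.wmv -f null -
    ffmpeg -benchmark -i move.wmv -f null -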

  • ffmpeg with multiple live-stream inputs adds async delay after filter

12 January 2021, by Godmar

    I am struggling to apply ffmpeg for remote control of an autonomous truck.

    There are 3 video streams from cameras on the local network, described with .sdp files like this one (MJPEG over RTP, correct me if I'm wrong):

    m=video 50910 RTP/AVP 26
    c=IN IP4 192.168.1.91

    I want to make a single video stream from the three pictures combined, using this:

    ffmpeg -hide_banner -protocol_whitelist "rtp,file,udp" -i "cam1.sdp" \
    -protocol_whitelist "rtp,file,udp" -i "cam2.sdp" \
    -protocol_whitelist "rtp,file,udp" -i "cam3.sdp" \
    -filter_complex "\
    nullsrc=size=1800x600 [back]; \
    [back][1:v]overlay=1000[tmp1]; \
    [tmp1][2:v]overlay=600[tmp2]; \
    [tmp2][0:v]overlay" \
    -vcodec libx264 \
    -crf 25 -maxrate 4M -bufsize 8M -r 30 -preset ultrafast -tune zerolatency \
    -f mpegts udp://localhost:1234

    When I launch this, ffmpeg starts sending errors about RTP packets being lost. In the output the fps of every camera seems unstable, so this is unacceptable.
    I am able to launch ffplay or mplayer on all three cameras simultaneously, and I can also produce such a stream using a pre-recorded video file as input. So it seems like ffmpeg just can't read three UDP streams that fast.
    The cameras are streaming at 10 Mbit/s, 800x600, 30 fps MJPEG; those are the minimal settings I can afford, but the cameras can do much more.

    So I tried to do something about the size of the UDP buffer. There is a way to set buffer_size and fifo_size for a UDP stream, but no such option for a stream described by an .sdp file. I did find a way to run the stream with an rtp://-style URL, but it doesn't seem to pass the arguments after '?' down to UDP.

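    For a plain UDP input those options go into the URL's query string, e.g. (a sketch only; the address and port are from the .sdp above, the sizes are arbitrary):

    ffmpeg -i "udp://192.168.1.91:50910?buffer_size=10000000&fifo_size=500000" ...

    but with an .sdp input there is no place to attach them.
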
    My next idea was to launch multiple ffmpeg instances: receive the streams separately, process them, and re-stream to another instance, which would consume any kind of stream, stitch the pictures together, and send the result out. That would actually be a good setup, since I need to filter the streams individually (crop, lens-correct, rotate), and a single large -filter_complex in one ffmpeg instance might not handle all the streams. And I'm going to have 3 more of them.

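    Each re-streamer instance would look roughly like this (a sketch of the idea, not my exact command; the filter chain and port are placeholders):

    ffmpeg -protocol_whitelist "rtp,file,udp" -i "cam1.sdp" \
    -vf "crop=780:580:10:10,rotate=PI" -c:v mjpeg -f mpegts udp://localhost:1241

    with the stitcher instance reading the three udp://localhost:124x streams and applying the overlay graph from above.
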
    I tried to implement this setup using 3 FIFO pipes and, alternatively, 3 udp://localhost:124x internal streams. Neither approach solved my problem, but the separate ffmpeg instances do seem to be able to receive three streams simultaneously.
    I was able to open the re-streamed feeds through the pipes and through UDP via mplayer or ffplay; they are completely synced and live.
    The stitching still fails miserably.
    The pipes got me a few seconds of delay per camera, and after stitching the streams were choppy and out of sync.
    The udp:// variant got me a smooth video stream as a result, but one camera has a 5 sec delay, and the others have 15 and 25.

    This smells like a buffer. Changing fifo_size and buffer_size doesn't seem to have much influence.
    I added a local-time timestamp in the re-streamer instances - that is how I found the 5, 15 and 25 sec delays.
    I added a frame timestamp in the stitcher instance - those come out completely synced. So setpts=PTS-STARTPTS doesn't help either.

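    For what it's worth, the input-side options commonly suggested for shrinking ffmpeg's receive-side buffering (a sketch; I have not verified that they help in this setup) would be applied per input like this:

    ffmpeg -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 0 \
    -i udp://localhost:1241 ...
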
    So, the buffering happens somewhere between the udp:// socket and the -filter_complex input. How do I get rid of it? What do you think of my workaround? Am I doing it completely wrong?