
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (53)
-
Taking part in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
Translation is done through SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
At the moment MediaSPIP is only available in French and (...) -
Authorisations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page -
Supporting all media types
13 April 2011, by
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (7482)
-
JavaFXFrameConverter consuming insane amounts of memory
17 July 2023, by iexav
The convert() method of JavaFXFrameConverter (from the Java wrapper of the FFmpeg C library) is consuming an outrageous amount of memory. To elaborate: it does not happen in every case. If I just create an instance of the class in my main method, grab a frame via FFmpegFrameGrabber and hand it to convert(), memory usage is next to nothing. However, when I attempt to do pretty much the same thing in a class of mine that uses an ExecutorService, memory usage jumps to 8 gigabytes as soon as convert() is called. The converter and executor service are declared as member variables of my class, namely:


final JavaFXFrameConverter converter = new JavaFXFrameConverter();
private ExecutorService videoExecutor;



(the videoExecutor is instantiated in the constructor of the class):


videoExecutor = Executors.newSingleThreadExecutor();



Now, the method I am using to process the video frames is this:


private void processVideo() {
    videoExecutor.submit(() -> {
        processingVideo.set(true);
        try {

            while (processingVideo.get() && (videoQueue.peek()) != null) {

                final Frame cloneFrame = videoQueue.poll();
                final Image image = converter.convert(cloneFrame);
                final long timeStampDeltaMicros = cloneFrame.timestamp - timer.elapsedMicros();
                if (timeStampDeltaMicros > 0) {
                    final long delayMillis = timeStampDeltaMicros / 1000L;
                    try {
                        Thread.sleep(delayMillis);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }

                cloneFrame.close();
                System.out.println("submitted image");
                videoListener.submitData(image);
            }
        } catch (NullPointerException e) {
            NullPointerException ex = new NullPointerException("Error while processing video frames.");
            videoListener.notifyFailure(ex);
        }

        processingVideo.set(false);

    });
}



I included the whole method for a bit more context, but realistically the only part that really matters is the converter.convert(cloneFrame) call; I used the IntelliJ profiler and the debugger, and this is exactly where the problem occurs. When convert() is called, after some intermediate steps it eventually ends up in this method:


public <T extends Buffer> void getPixels(int x, int y, int w, int h, WritablePixelFormat<T> pixelformat, T buffer, int scanlineStride) {
    int fss = this.frame.imageStride;
    if (this.frame.imageChannels != 3) {
        throw new UnsupportedOperationException("We only support frames with imageChannels = 3 (BGR)");
    } else if (!(buffer instanceof ByteBuffer)) {
        throw new UnsupportedOperationException("We only support bytebuffers at the moment");
    } else {
        ByteBuffer bb = (ByteBuffer) buffer;
        ByteBuffer b = (ByteBuffer) this.frame.image[0];

        for (int i = y; i < y + h; ++i) {
            for (int j = x; j < x + w; ++j) {
                int base = 3 * j;
                bb.put(b.get(fss * i + base));
                bb.put(b.get(fss * i + base + 1));
                bb.put(b.get(fss * i + base + 2));
                bb.put((byte) -1);
            }
        }

    }
}


Now, everything up to this point is fine; memory usage sits at around 130 MB. But when execution enters these two for loops, that is where the absurd memory usage occurs. Every single one of these bb.put calls nets me around 3 more megabytes of memory, and by the end of it you can probably guess what happens. All of these allocations also appear to happen on the stack, which I'm assuming is why my memory usage stops at around 8-8.5 gigabytes instead of the program crashing (that has also happened, with an out-of-memory exception thrown, but usually it just lingers at those 8 gigabytes). Frankly, I'm at a bit of a loss. I haven't seen anyone anywhere mention this, and I have run out of things to try, so I am making this post.


By the way, another thing I tried was creating the ExecutorService in the same class as the main method; when I submitted the task there, I also didn't have these memory problems.
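
For reference, here is a minimal sketch (not part of the original post) of the single-frame, main-method case the asker describes as working fine, assuming JavaCV's FFmpegFrameGrabber and JavaFXFrameConverter and a hypothetical input.mp4 path:


import javafx.scene.image.Image;
import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.JavaFXFrameConverter;

public class SingleFrameConvertDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical input path; any short test video will do.
        FFmpegFrameGrabber grabber = new FFmpegFrameGrabber("input.mp4");
        grabber.start();

        Frame frame = grabber.grabImage();               // one decoded video frame
        JavaFXFrameConverter converter = new JavaFXFrameConverter();
        Image image = converter.convert(frame);          // convert it to a JavaFX Image
        System.out.println("Converted frame: " + image.getWidth() + "x" + image.getHeight());

        grabber.stop();
        grabber.release();
    }
}


In this form the asker reports negligible memory use; the blow-up only appears once the same conversion runs inside the ExecutorService task shown above.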


-
naming dual audio tracks with ffmpeg
22 October 2012, by Azevedo
I'm using ffmpeg to combine 1 video + 2 audio tracks into an mp4 container.
How do I set the name of each track? (so in the player they won't show up as '0' and '1')
ffmpeg -i file1.avi -i extra-audio.mp4 -c copy -map 0:0 -map 0:1 -map 1:0 final.mp4
Thanks!
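
Not part of the original post, but for reference: a commonly used way to attach human-readable names is ffmpeg's per-stream metadata option (-metadata:s:a:N title=...), for example:


ffmpeg -i file1.avi -i extra-audio.mp4 -c copy -map 0:0 -map 0:1 -map 1:0 -metadata:s:a:0 title="Main audio" -metadata:s:a:1 title="Extra audio" final.mp4


Whether and how the titles are displayed still depends on the player and the container; some players show the language tag (-metadata:s:a:0 language=eng) rather than the title.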
-
How to pass BytesIO image objects to ffmpeg ?
13 April 2023, by Mr.Slow
I have a (nested) list of BytesIO objects (images) that I would like to pass to ffmpeg to make a video. I do know that ffmpeg cannot take them directly. What should I convert them to first? There might be a better way using 'pipe:', which I have not managed to implement yet.
(in this example code I ignore image duration and audio, too)


def merge_videos(file_id: float, audio_list: List[BinaryIO], duration_list: List[float], images_nested_list):
    # flatten the nested list of images
    images_list = [image for images_sublist in images_nested_list for image in images_sublist]

    additional_parameters = {'c:a': 'aac', 'c:v': 'libx264'}

    # Create a BytesIO object to hold the output video data
    output_data = io.BytesIO()

    # create the FFmpeg command with the specified parameters and pipe the output to the BytesIO object
    command = ffmpeg.output(*images_list, '-', vf='fps=10,format=yuv420p', preset='veryfast', shortest=None, r=10, max_muxing_queue_size=4000, **additional_parameters).pipe(output_data)

    try:
        # run the FFmpeg command with error and output capture
        subprocess.check_output(['ffmpeg', '-y', '-f', 'concat', '-safe', '0', '-i', 'audio.txt', '-i', '-', '-c:v', 'copy', '-c:a', 'aac', f"{PROJECT_PATH}/data/final-{file_id}.mp4"], input=output_data.getvalue())
        log.info("Final video with file_id %s has been converted successfully", file_id)



...this code returns:


TypeError: Expected incoming stream(s) to be of one of the following types: ffmpeg.nodes.FilterableStream; got <class '...'>


How should I handle this, please? Thanks for the help.
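
Not part of the original post, but one possible approach, sketched under the assumption that each BytesIO already holds an encoded image (PNG is assumed here): the raw bytes can be streamed to ffmpeg's stdin through the image2pipe demuxer, so nothing is written to disk. The function and file names below are hypothetical.


import io
import subprocess
from typing import List


def images_to_video(images_nested_list: List[List[io.BytesIO]], out_path: str = "out.mp4", fps: int = 10) -> None:
    """Pipe PNG-encoded BytesIO frames to ffmpeg's stdin via the image2pipe demuxer."""
    # Flatten the nested list of images, as in the original code.
    images_list = [image for images_sublist in images_nested_list for image in images_sublist]

    cmd = [
        "ffmpeg", "-y",
        "-f", "image2pipe",        # read concatenated encoded images from stdin
        "-framerate", str(fps),
        "-i", "-",                 # '-' means stdin
        "-c:v", "libx264",
        "-vf", "format=yuv420p",
        out_path,
    ]
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
    for buf in images_list:
        proc.stdin.write(buf.getvalue())   # each BytesIO is assumed to hold one encoded PNG
    proc.stdin.close()
    proc.wait()


Audio and per-image durations, which the question sets aside, would then be handled either in a second muxing pass or by adding the extra inputs to the same command.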