
Other articles (105)
-
Customize by adding your logo, banner or background image
5 September 2013
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Emballe médias: what is it for?
4 February 2011
This plugin is designed to manage sites that publish documents of all types.
It creates "media" items, namely: a "media" item is an article, in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a so-called "media" article;
-
Submitting improvements and additional plugins
10 April 2011
If you have developed a new extension that adds one or more features useful to MediaSPIP, let us know and its integration into the official distribution will be considered.
You can use the development mailing list to announce it or to ask for help with writing the plugin. Since MediaSPIP is based on SPIP, you can also use SPIP's SPIP-zone mailing list to (...)
On other sites (14816)
-
FFMpeg fails to detect input stream when outputting to pipe's stdout
27 September 2020, by La bla bla
We have H.264 frames as individual files; we read them in a Python wrapper and pipe them to ffmpeg.


The ffmpeg subprocess is launched using


import subprocess as sp

command = ["ffmpeg",
           "-hide_banner",
           "-vcodec", "h264",
           "-i", "pipe:0",
           "-video_size", "5120x3072",
           "-an", "-sn",  # disable audio and subtitle processing (there is no audio)
           "-pix_fmt", "bgr24",
           "-vcodec", "rawvideo",
           "-f", "image2pipe", "-"]
pipe = sp.Popen(command, stdin=sp.PIPE, stdout=sp.PIPE, bufsize=10 ** 8)



Our goal is to use ffmpeg to convert the individual h264 frames into raw BGR data that we can manipulate using OpenCV.
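
If that works, turning each chunk of raw bytes into something OpenCV can use is straightforward; a sketch (assuming numpy and the 5120x3072 bgr24 output configured above):

import numpy as np

width, height = 5120, 3072
raw_image = pipe.stdout.read(width * height * 3)  # one bgr24 frame
frame = np.frombuffer(raw_image, dtype=np.uint8).reshape((height, width, 3))
# frame is now an H x W x 3 BGR array, directly usable with cv2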


The files are read in a background thread and piped using


...
for path in files:
    with open(path, "rb") as f:
        data = f.read()
    pipe.stdin.write(data)
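
One note in passing (not from the original post): ffmpeg only sees end-of-stream when the writing side closes the pipe, so once all files have been written the feeding thread should typically also do:

pipe.stdin.close()  # signal EOF so ffmpeg can flush its remaining frames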



When we try to read ffmpeg's output pipe using


while True:
    # Capture frame-by-frame
    raw_image = pipe.stdout.read(width * height * 3)



we get


[h264 @ 0x1c31000] Could not find codec parameters for stream 0 (Video: h264, none): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
pipe:0: could not find codec parameters
Input #0, h264, from 'pipe:0':
 Duration: N/A, bitrate: N/A
 Stream #0:0: Video: h264, none, 25 tbr, 1200k tbn, 50 tbc
Output #0, image2pipe, to 'pipe:':
Output file #0 does not contain any stream
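
The message itself points at two real ffmpeg input options, -probesize (in bytes) and -analyzeduration (in microseconds), which must appear before -i to take effect; raising them is a cheap first thing to try. A sketch of the relevant part of the command:

command = ["ffmpeg",
           "-hide_banner",
           "-probesize", "50000000",        # inspect up to ~50 MB of input
           "-analyzeduration", "10000000",  # and up to 10 s before fixing stream parameters
           "-vcodec", "h264",
           "-i", "pipe:0",
           # ... output options as before ...
           "-f", "image2pipe", "-"]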



However, when I change the sp.Popen command to


f = open('ffmpeg_output.log', 'wt')
pipe = sp.Popen(command, stdin=sp.PIPE, stdout=f, bufsize=10 ** 8)  # note: stdout is now the file f



we get gibberish (i.e. binary data) in the ffmpeg_output.log file, and the console reads

[h264 @ 0xf20000] Stream #0: not enough frames to estimate rate; consider increasing probesize
[h264 @ 0xf20000] decoding for stream 0 failed
Input #0, h264, from 'pipe:0':
 Duration: N/A, bitrate: N/A
 Stream #0:0: Video: h264 (Baseline), yuv420p, 5120x3072, 25 fps, 25 tbr, 1200k tbn, 50 tbc
Output #0, image2pipe, to 'pipe:':
 Metadata:
 encoder : Lavf56.40.101
 Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24, 5120x3072, q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc
 Metadata:
 encoder : Lavc56.60.100 rawvideo
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
Invalid UE golomb code
 Last message repeated 89 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code



Why does ffmpeg care whether its stdout is a file or a pipe?
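
One thing worth ruling out here (an editorial note, not from the original post): the OS pipe buffer between the two processes is small (typically 64 KiB on Linux; the bufsize argument only sizes Python's userspace buffer), so if stdout is not drained while stdin is still being fed, both processes can stall. Draining stdout in its own thread makes that easy to check:

import threading

def drain_stdout(pipe, frame_size, frames):
    # keep reading fixed-size bgr24 frames until ffmpeg closes its stdout
    while True:
        buf = pipe.stdout.read(frame_size)
        if not buf:
            break
        frames.append(buf)

frames = []
reader = threading.Thread(target=drain_stdout,
                          args=(pipe, width * height * 3, frames),
                          daemon=True)
reader.start()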


-
Getting "OSError : [Errno 32] Broken pipe" while saving a numpy array using imageio.mimwrite into io.BytesIO() object
10 November 2020, by green_hunter434
I am trying to save a NumPy array (an array of frames) as a video using imageio.mimwrite.


An operation like this works fine:


imageio.mimwrite('./temp4.mp4', frames, fps=fps, format='.mp4')



But when I try something like this:


fo = io.BytesIO()
imageio.mimwrite(fo, frames, fps=fps, format='.mp4')



or


writer = imageio.get_writer(fo, fps=fps, format='.mp4')

for im in frames:
    writer.append_data(imageio.imread(im))
writer.close()



I get the following error:


---------------------------------------------------------------------------
BrokenPipeError Traceback (most recent call last)
c:\program files (x86)\python38-32\lib\site-packages\imageio_ffmpeg\_io.py in write_frames(path, size, pix_fmt_in, pix_fmt_out, fps, quality, bitrate, codec, macro_block_size, ffmpeg_log_level, ffmpeg_timeout, input_params, output_params)
 478 try:
--> 479 p.stdin.write(bb)
 480 except Exception as err:

BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

OSError Traceback (most recent call last)
 in <module>
 1 for im in ret3:
----> 2 writer.append_data(im)
 3 writer.close()

c:\program files (x86)\python38-32\lib\site-packages\imageio\core\format.py in append_data(self, im, meta)
 500 im = asarray(im)
 501 # Call
--> 502 return self._append_data(im, total_meta)
 503 
 504 def set_meta_data(self, meta):

c:\program files (x86)\python38-32\lib\site-packages\imageio\plugins\ffmpeg.py in _append_data(self, im, meta)
 572 
 573 # Write. Yes, we can send the data in as a numpy array
--> 574 self._write_gen.send(im)
 575 
 576 def set_meta_data(self, meta):

c:\program files (x86)\python38-32\lib\site-packages\imageio_ffmpeg\_io.py in write_frames(path, size, pix_fmt_in, pix_fmt_out, fps, quality, bitrate, codec, macro_block_size, ffmpeg_log_level, ffmpeg_timeout, input_params, output_params)
 484 "OUTPUT:\n".format(err, cmd_str)
 485 )
--> 486 raise IOError(msg)
 487 
 488 nframes += 1

OSError: [Errno 32] Broken pipe

FFMPEG COMMAND:
c:\program files (x86)\python38-32\lib\site-packages\imageio_ffmpeg\binaries\ffmpeg-win32-v4.2.2.exe -y -f rawvideo -vcodec rawvideo -s 256x256 -pix_fmt rgb24 -r 29.97 -i - -an -vcodec libx264 -pix_fmt yuv420p -crf 25 -v warning C:\Users\pramu\AppData\Local\Temp\imageio_fh1fqkgc

FFMPEG STDERR OUTPUT:


I get the same error with both the Anaconda ffmpeg and a non-Anaconda ffmpeg.


Is there a way to fix this error?


Also, is there a better way to convert a list of frames into .mp4, or to serialize it for sending through an API?
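
One workaround worth sketching for the serialization question (an editorial suggestion, not from the original post): .mp4 output generally needs a seekable target, so encode into a real temporary file and read the bytes back; frames_to_mp4_bytes below is a hypothetical helper:

import os
import tempfile
import imageio

def frames_to_mp4_bytes(frames, fps):
    # encode to a real temporary file, then return its contents as bytes
    with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as tmp:
        path = tmp.name  # closed first so ffmpeg can reopen it on Windows
    try:
        imageio.mimwrite(path, frames, fps=fps)
        with open(path, "rb") as f:
            return f.read()
    finally:
        os.remove(path)

The returned bytes can then be wrapped in io.BytesIO or sent through an API directly.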


Thank you!


-
How can I pipe the output of ffmpeg to ffplay in Java?
30 September 2020, by guidop21
I currently have a Java project that lets me use ffmpeg to split live video coming from a webcam into many 60-second clips.
These are the main methods:


@Override
public boolean startRecording(String fileName, Integer clipPrefix, String folderClipName) {
    final String methodName = "startRecording(fileName, folderClipPath)";
    this.startLog(methodName, fileName, folderClipName);
    try {
        File folder = new File(folderClipName + System.getProperty("file.separator") + fileName);
        if (!folder.exists()) {
            folder.mkdir();
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    String fileNameFull = buildFileName(fileName, clipPrefix);
    String[] cmd = buildCommandArray(fileNameFull);
    logger.debug("{} - executing FfmpegStartRecordingCommand with cmd:{}", methodName, Arrays.toString(cmd));
    Process p = new FfmpegStartRecordingCommand().execute(cmd);
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    return p.isAlive();
}

public String[] buildCommandArray(String fileNameFull) {
    String[] cmd = new String[37];
    cmd[0] = "-f";
    cmd[1] = "dshow";
    cmd[2] = "-rtbufsize";
    cmd[3] = "2000M";
    cmd[4] = "-i";
    cmd[5] = cameraName;
    cmd[6] = "-preset";
    cmd[7] = "slow";
    cmd[8] = "-codec:a";
    cmd[9] = "libfdk_aac";
    cmd[10] = "-b:a";
    cmd[11] = "128k";
    cmd[12] = "-codec:v";
    cmd[13] = "libx264";
    cmd[14] = "-pix_fmt";
    cmd[15] = "yuv420p";
    cmd[16] = "-b:v";
    cmd[17] = "1000k";
    cmd[18] = "-minrate";
    cmd[19] = "500k";
    cmd[20] = "-maxrate";
    cmd[21] = "2000k";
    cmd[22] = "-bufsize";
    cmd[23] = "2000k";
    cmd[24] = "-vf";
    cmd[25] = "scale=854:480";
    cmd[26] = "-f";
    cmd[27] = "segment";
    cmd[28] = "-segment_time";
    cmd[29] = "60";
    cmd[30] = "-reset_timestamps";
    cmd[31] = "1";
    cmd[32] = "-flush_packets";
    cmd[33] = "1";
    cmd[34] = "-loglevel";
    cmd[35] = "quiet";
    cmd[36] = fileNameFull;
    return cmd;
}



Alongside these, I would like a method that lets me run ffplay at the same time, so I can see what I am acquiring.
On the internet I found the following command:


ffmpeg -i video out.mp4 | ffplay -i video


but I don't know how to implement it in Java.
Any ideas?
Thanks in advance.
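
A minimal sketch of how that shell pipe could be reproduced in Java (an editorial illustration, not from the original post; "input.mp4" is a placeholder, ffmpeg and ffplay are assumed to be on the PATH, and InputStream.transferTo requires Java 9+):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class FfmpegToFfplay {
    public static void main(String[] args) throws IOException {
        // ffmpeg writes an MPEG-TS copy of the stream to its stdout ("pipe:1")
        Process ffmpeg = new ProcessBuilder(
                "ffmpeg", "-i", "input.mp4", "-f", "mpegts", "pipe:1")
                .redirectError(ProcessBuilder.Redirect.INHERIT)
                .start();
        // ffplay reads the stream from its stdin ("-i -")
        Process ffplay = new ProcessBuilder("ffplay", "-i", "-")
                .redirectError(ProcessBuilder.Redirect.INHERIT)
                .start();
        // copy ffmpeg's stdout into ffplay's stdin, as the shell "|" would
        try (InputStream in = ffmpeg.getInputStream();
             OutputStream out = ffplay.getOutputStream()) {
            in.transferTo(out);
        }
    }
}

In the project above, the same idea would mean adding a second output to the ffmpeg command (the segment output in cmd[36], plus "-f", "mpegts", "pipe:1") and feeding that process's stdout to ffplay as shown.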