
Other articles (54)
-
MediaSPIP v0.2
21 June 2013. MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources in the standalone version.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further changes (...) -
Making files available
14 April 2011. By default, when it is initialized, MediaSPIP does not let visitors download files, whether they are originals or the result of their transformation or encoding. It only lets them view the files.
However, it is possible, and easy, to give visitors access to these documents in various forms.
All of this happens on the skeleton configuration page. You need to go to the channel's administration area and choose, in the navigation, (...) -
MediaSPIP version 0.1 Beta
16 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in the standalone version.
For a working installation, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)
On other sites (10428)
-
Reading file stream from Google Cloud Storage to ffmpeg (using fluent-ffmpeg)
27 July 2018, by ekuusi
I'm trying to run ffmpeg on a Node.js backend with fluent-ffmpeg, reading input files from Google Cloud Storage. Everything works fine if I download the file first:
const file = storage
  .bucket('example_bucket')
  .file('examplefile.mp4');
file.download({destination: 'test.mp4'}, (err) => {
  let command = ffmpeg()
    .input('test.mp4')
    .duration(10)
    .format('mp4');
  command.save('test_out.mp4');
});
res.json([{
  message: 'Command sent!'
}]);

But if I try to use a readable stream as the input, it fails:
const file = storage
  .bucket('example_bucket')
  .file('examplefile.mp4');
var filestream = file.createReadStream();
let command = ffmpeg()
  .input(filestream)
  .duration(10)
  .format('mp4');
command.save('test_out.mp4');
res.json([{
  message: 'Command sent!'
}]);

Here is the full output of ffmpeg when trying to do the conversion. It seems to read the details of the file fine, but for some reason it fails, saying "Cannot process video: ffmpeg exited with code 1: pipe:0: Invalid data found when processing input"
Spawned Ffmpeg with command: ffmpeg -i pipe:0 -y -t 10 -f mp4 test_out.mp4
Stderr output: ffmpeg version 4.0.1 Copyright (c) 2000-2018 the FFmpeg developers
Stderr output: built with Apple LLVM version 9.1.0 (clang-902.0.39.2)
Stderr output: configuration: --prefix=/opt/local --enable-swscale --enable-avfilter --enable-avresample --enable-libmp3lame --enable-libvorbis --enable-libopus --enable-librsvg --enable-libtheora --enable-libopenjpeg --enable-libmodplug --enable-libvpx --enable-libsoxr --enable-libspeex --enable-libass --enable-libbluray --enable-lzma --enable-gnutls --enable-fontconfig --enable-libfreetype --enable-libfribidi --disable-libjack --disable-libopencore-amrnb --disable-libopencore-amrwb --disable-libxcb --disable-libxcb-shm --disable-libxcb-xfixes --disable-indev=jack --enable-opencl --disable-outdev=xv --enable-audiotoolbox --enable-videotoolbox --enable-sdl2 --mandir=/opt/local/share/man --enable-shared --enable-pthreads --cc=/usr/bin/clang --arch=x86_64 --enable-x86asm --enable-libx265 --enable-gpl --enable-postproc --enable-libx264 --enable-libxvid
Stderr output: libavutil 56. 14.100 / 56. 14.100
Stderr output: libavcodec 58. 18.100 / 58. 18.100
Stderr output: libavformat 58. 12.100 / 58. 12.100
Stderr output: libavdevice 58. 3.100 / 58. 3.100
Stderr output: libavfilter 7. 16.100 / 7. 16.100
Stderr output: libavresample 4. 0. 0 / 4. 0. 0
Stderr output: libswscale 5. 1.100 / 5. 1.100
Stderr output: libswresample 3. 1.100 / 3. 1.100
Stderr output: libpostproc 55. 1.100 / 55. 1.100
Stderr output: [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fb6db80a200] stream 2, offset 0x30: partial file
Stderr output: [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fb6db80a200] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 1920x1080, 4647 kb/s): unspecified pixel format
Stderr output: Consider increasing the value for the 'analyzeduration' and 'probesize' options
Stderr output: Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'pipe:0':
Stderr output: Metadata:
Stderr output: major_brand : isom
Stderr output: minor_version : 512
Stderr output: compatible_brands: isomiso2avc1mp41
Stderr output: encoder : Lavf58.12.100
Stderr output: location-eng : +60.2121+024.8754/
Stderr output: location : +60.2121+024.8754/
Stderr output: Duration: 00:02:23.13, bitrate: N/A
Stderr output: Stream #0:0(eng): Video: h264 (avc1 / 0x31637661), none, 1920x1080, 4647 kb/s, SAR 1:1 DAR 16:9, 59.94 fps, 59.94 tbr, 60k tbn, 120k tbc (default)
Stderr output: Metadata:
Stderr output: handler_name : VideoHandler
Stderr output: Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s (default)
Stderr output: Metadata:
Stderr output: handler_name : SoundHandler
Stderr output: Stream #0:2(eng): Data: none (tmcd / 0x64636D74)
Stderr output: Metadata:
Stderr output: handler_name : TimeCodeHandler
Stderr output: Stream mapping:
Input is aac (mp4a / 0x6134706D) audio with h264 (avc1 / 0x31637661) video
Stderr output: Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Stderr output: Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Stderr output: [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fb6db80a200] stream 0, offset 0x34: partial file
Stderr output: pipe:0: Invalid data found when processing input
Stderr output: Cannot determine format of input stream 0:0 after EOF
Stderr output: Error marking filters as finished
Stderr output: Conversion failed!
Stderr output:
Cannot process video: ffmpeg exited with code 1: pipe:0: Invalid data found when processing input
Cannot determine format of input stream 0:0 after EOF
Error marking filters as finished
Conversion failed!
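What stands out in the log is the input: ffmpeg is reading from pipe:0, a forward-only stream it cannot seek in, and the MP4 demuxer gives up while probing ("partial file", "Cannot determine format of input stream 0:0 after EOF"). The sketch below reproduces the two situations outside Node with plain Python subprocess calls, purely to separate ffmpeg's behaviour from fluent-ffmpeg and Cloud Storage; the local file name is a placeholder, and the seekability explanation is an assumption rather than something the question confirms.

# Minimal sketch: the same ffmpeg command run once against a seekable local file
# and once against a non-seekable stdin pipe. 'examplefile.mp4' is a hypothetical
# local copy of the object from the question; ffmpeg is assumed to be on PATH.
import subprocess

# 1) Seekable input: ffmpeg can jump around the file while probing it.
subprocess.run(
    ["ffmpeg", "-y", "-i", "examplefile.mp4", "-t", "10", "-f", "mp4", "out_from_path.mp4"],
    check=True,
)

# 2) Pipe input: ffmpeg only sees a forward-only byte stream on pipe:0.
#    Depending on how the MP4 was written (where its moov atom sits), probing
#    can fail here in the same way as in the log above.
proc = subprocess.Popen(
    ["ffmpeg", "-y", "-i", "pipe:0", "-t", "10", "-f", "mp4", "out_from_pipe.mp4"],
    stdin=subprocess.PIPE,
)
with open("examplefile.mp4", "rb") as src:
    proc.stdin.write(src.read())
proc.stdin.close()
proc.wait()

If the pipe variant fails the same way, the usual workarounds are to download to a temporary file first, as in the working snippet above, or to remux the source once with -movflags +faststart so that the index sits at the front of the file and can be read before the stream ends.
-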
Python asyncio subprocess code returns "pipe closed by peer or os.write(pipe, data) raised exception."
4 November 2022, by Duke Dougal
I am trying to convert a synchronous Python process to asyncio. Any ideas what I am doing wrong?


This is the synchronous code which successfully starts ffmpeg and converts a directory of webp files into a video.


import subprocess
import shlex
from os import listdir
from os.path import isfile, join

output_filename = 'output.mp4'
process = subprocess.Popen(shlex.split(f'ffmpeg -y -framerate 60 -i pipe: -vcodec libx265 -pix_fmt yuv420p -crf 24 output.mp4'), stdin=subprocess.PIPE)

thepath = '/home/ubuntu/webpfiles/'
thefiles = [f for f in listdir(thepath) if isfile(join(thepath, f))]
for filename in thefiles:
    absolute_path = f'{thepath}{filename}'
    with open(absolute_path, 'rb') as f:
        process.stdin.write(f.read())

process.stdin.close()
process.wait()
process.terminate()



This async code fails:


from os import listdir
from os.path import isfile, join
import shlex
import asyncio

outputfilename = 'output.mp4'

async def write_stdin(proc):
    thepath = '/home/ubuntu/webpfiles/'
    thefiles = [f for f in listdir(thepath) if isfile(join(thepath, f))]
    thefiles.sort()
    for filename in thefiles:
        absolute_path = f'{thepath}{filename}'
        with open(absolute_path, 'rb') as f:
            await proc.communicate(input=f.read())

async def create_ffmpeg_subprocess():
    bin = f'/home/ubuntu/bin/ffmpeg'
    params = f'-y -framerate 60 -i pipe: -vcodec libx265 -pix_fmt yuv420p -crf 24 {outputfilename}'
    proc = await asyncio.create_subprocess_exec(
        bin,
        *shlex.split(params),
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    return proc

async def start():
    loop = asyncio.get_event_loop()
    proc = await create_ffmpeg_subprocess()
    task_stdout = loop.create_task(write_stdin(proc))
    await asyncio.gather(task_stdout)

if __name__ == '__main__':
    asyncio.run(start())



The output for the async code is:


pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.
pipe closed by peer or os.write(pipe, data) raised exception.



etc - one line for each webp file
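
That message pattern matches how proc.communicate() behaves: it writes its input, closes the child's stdin and waits for the process, so every file after the first is written to a pipe that has already been closed. Below is a minimal sketch of one way to restructure the loop, assuming the same paths and ffmpeg options as in the question: write each file to proc.stdin, await drain(), and close stdin only once, after the last file.

# Sketch, assuming the same paths and ffmpeg options as in the question:
# feed every webp file into one continuous stdin stream, then signal EOF.
import asyncio
import shlex
from os import listdir
from os.path import isfile, join

async def encode():
    proc = await asyncio.create_subprocess_exec(
        '/home/ubuntu/bin/ffmpeg',
        *shlex.split('-y -framerate 60 -i pipe: -vcodec libx265 -pix_fmt yuv420p -crf 24 output.mp4'),
        stdin=asyncio.subprocess.PIPE,
        # stdout/stderr are deliberately not captured, so ffmpeg cannot stall
        # on a full pipe that nothing reads
    )
    thepath = '/home/ubuntu/webpfiles/'
    for filename in sorted(f for f in listdir(thepath) if isfile(join(thepath, f))):
        with open(join(thepath, filename), 'rb') as f:
            proc.stdin.write(f.read())
            await proc.stdin.drain()   # wait until the pipe has room again
    proc.stdin.close()                 # signal EOF; only now can ffmpeg finish the mp4
    await proc.stdin.wait_closed()
    await proc.wait()

if __name__ == '__main__':
    asyncio.run(encode())

Leaving stdout and stderr unredirected in the sketch also avoids the separate risk of ffmpeg stalling on a full output pipe that nothing reads.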


-
Grabbing a single image from a MS VS .NET 6.0 C# WPF process always returns the same image
12 May 2023, by Wolfgang Kurz
I am trying to develop a .NET 6.0 C# WPF application in Visual Studio 2022 to digitize my Super 8 cine films. To grab the frames of the film I want to use FFmpeg, because Accord.NET 6.0 DirectShow does not work under VS 2022.
I use the FFmpeg 64-bit static Windows build from www.gyan.dev,
version 2022-09-29-git-8089fe072e-full_build-www.gyan.dev.


FFmpeg is invoked from a Process in my application, which sets the FFmpeg start parameters. The webcam parameter webCam is "USB Webcam" (a digital Celestron handheld microscope connected via USB; it is DirectShow compatible).
The video resolution parameter camRes is "1280x960".


public void UpdateImage(string aString)
{
    string startPath = "C:" + MainWindow.bSl + "ffmpeg" + MainWindow.bSl + "bin" + MainWindow.bSl + "ffmpeg.exe ";

    MainWindow.aResult = "";
    System.Drawing.Bitmap result;
    ProcessStartInfo psi;
    psi = new ProcessStartInfo();
    psi.FileName = startPath;
    string targPath;
    psi.Arguments = "-f dshow -video_size" + MainWindow.dQuote + webCam + MainWindow.dQuote +
        " -framerate 10 -i video=" + MainWindow.dQuote + camRes + MainWindow.dQuote +
        " -frames:v 1 test%3d.bmp -update 1";
    string errors = "";
    string results = "";
    psi.CreateNoWindow = false;
    psi.RedirectStandardOutput = true;
    psi.RedirectStandardError = true;
    psi.WindowStyle = ProcessWindowStyle.Normal;
    psi.WorkingDirectory = "C:" + MainWindow.bSl + "ffmpeg" + MainWindow.bSl + "bin" + MainWindow.bSl;

    using (Process theProcess = new Process())
    {
        theProcess.StartInfo = psi;
        theProcess.Start();

        theProcess.WaitForExit();
        while (theProcess.HasExited == false)
        {
            Thread.Sleep(50);
        }
        Thread.Sleep(50);

        try
        {
            if (File.Exists(pathCI) == true)
            {
                DefineImage.ffmpegRes = new Bitmap(pathCI);
                MainWindow.actMWInstance.UpdateMessage(DateTime.Now +
                    "- Old image disposed, new image grabbed");
                File.Delete(pathCI);
            }
        }
        catch (System.Exception ex)
        {
            MainWindow.actMWInstance.UpdateMessage(DateTime.Now + "Image grabbing failed");
        }
        theProcess.Close();
        theProcess.Dispose();
    }

    if (aString.Length == 0)
    {
        File.Delete(pathCI);
        MainWindow.actMWInstance.DoMove("200", "0", true);
    }
    if (ffmpegRes != null)
    {
        BitmapSource aBMPSrc = BitmapConversion.ToWpfBitmap(ffmpegRes);
        IMGFrame.Source = aBMPSrc;
    }
    Show();
}



The attached screenshot shows the expected GUI content.
https://www.wkurz.com/wkurz/images/FFMPEGTEST.jpg


My problem now is: when I try to refresh the image in the DefineDialog window, I always get the same image, although the film has been moved on by one frame.

Is the first image cached by FFmpeg and always used again with the provided parameters?

How can I force FFmpeg to replace the image with a new one?



This is additional information to the question.
I have managed to grab the film frame, but I still always get the same image during a session.




It is obvious that the cine film frame has been successfully grabbed.
The next step should be to work on the image (correct brightness, contrast, gamma and colors) and then store the evaluated correction values. These values will later be applied to the frames during a batch process, which automatically transports the film, grabs the frames, corrects the colors and stores the frames as consecutively numbered images, which can then be used to generate a video with FFmpeg.
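
A minimal sketch of that final assembly step, driving ffmpeg from Python: the frame pattern, frame rate and output name are placeholders rather than values from the question, with 18 fps chosen as a common Super 8 projection speed.

# Sketch of the final assembly step: consecutively numbered frames to a video.
# Frame pattern, frame rate and output name are placeholders, not taken from
# the question.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-y",
        "-framerate", "18",        # common Super 8 projection speed
        "-i", "frame%04d.bmp",     # frame0001.bmp, frame0002.bmp, ...
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",
        "cine_film.mp4",
    ],
    check=True,
)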


Now, the problem I cannot overcome:

In the definition dialog there are two buttons (Refresh and New Image). The intention is to re-read / re-grab a frame if the current frame is not suitable for evaluating the correction values.

Whenever I click one of the buttons, FFmpeg seems to grab an image, but it always delivers the same one (identical to the first, already available image) to the caller (a .NET C# Process in my program).

The output is a single JPG image (test%04d.jpg) in a specified target directory. The generated output follows below and seems to be OK as far as I can tell.

I would ask anyone interested to look over the provided information; maybe someone has an idea what causes FFmpeg 6.0 (Von Neumann) to always deliver the first image grabbed in a session.
It seems that there is something cached and reused over and over again.
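
One way to test that hypothesis independently of the WPF code is to grab each frame into a fresh, uniquely named file and open exactly that file afterwards. The sketch below drives ffmpeg from Python; the executable path, device name, resolution and filter-free single-frame grab follow the log below, while the output naming is an assumption.

# Sketch: grab one frame from the DirectShow device into a uniquely named file,
# so that neither ffmpeg's output handling nor any image cache can hand back an
# old grab. Executable path, device name and size are taken from the log below.
import subprocess
import time

out = rf"C:\FilmProjList\TEST\grab_{int(time.time() * 1000)}.jpg"
subprocess.run(
    [
        r"C:\ffmpeg\bin\ffmpeg.exe", "-y", "-nostdin",
        "-f", "dshow", "-an",
        "-video_size", "1280x960", "-framerate", "10",
        "-i", "video=USB Microscope",
        "-frames:v", "1",
        out,
    ],
    check=True,
)
print("wrote", out)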


DefImg 147: UpdateImage started (lines starting with DefImg or DefineImage are test output statements produced by the C# program)



DefineImage 155 11.05.2023 17:53:20 C:\ffmpeg\bin\ffmpeg.exe -y -nostdin -f dshow -an -video_size 1280x960 -framerate 10 -i video="USB Microscope" -filter:v "smartblur=luma_radius=0.9:luma_strength=0.7:luma_threshold=0" -frames:v 1 C:\FilmProjList\TEST\TEST%%04d.jpg -update 1
DefineImage 180 ffmpeg version 6.0-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
libavutil 58. 2.100 / 58. 2.100
libavcodec 60. 3.100 / 60. 3.100
libavformat 60. 3.100 / 60. 3.100
libavdevice 60. 1.100 / 60. 1.100
libavfilter 9. 3.100 / 9. 3.100
libswscale 7. 1.100 / 7. 1.100
libswresample 4. 10.100 / 4. 10.100
libpostproc 57. 1.100 / 57. 1.100
Trailing option(s) found in the command: may be ignored.
Input #0, dshow, from 'video=USB Microscope':
Duration: N/A, start: 5368.514437, bitrate: N/A
Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422(tv, bt470bg/bt709/unknown), 1280x960, 10 fps, 10 tbr, 10000k tbn
Stream mapping:
Stream #0:0 -> #0:0 (rawvideo (native) -> mjpeg (native))
[swscaler @ 000001442fdcebc0] deprecated pixel format used, make sure you did set range correctly (that I do not understand)
Last message repeated 3 times
[mjpeg @ 00000144278b0d80] removing common factors from framerate
Output #0, image2, to 'C:\FilmProjList\TEST\TEST%04d.jpg':
Metadata:
encoder : Lavf60.3.100
Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/bt709/unknown, progressive), 1280x960, q=2-31, 200 kb/s, 10 fps, 10 tbn
Metadata:
encoder : Lavc60.3.100 mjpeg
Side data:
cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: N/A
frame= 0 fps=0.0 q=3.6 size= 0kB time=00:00:00.00 bitrate=N/A speed=N/A

frame= 1 fps=0.0 q=3.6 Lsize=N/A time=00:00:00.00 bitrate=N/A speed= 0x

video:45kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown

DefineImage 248 11.05.2023 17:53:23 (test output from the C# program)
The program "[11724] Cine2Video.exe" exited with code 0 (0x0).



