
Other articles (89)
-
Improving the base version
13 September 2013
A nicer multiple select
The Chosen plugin improves the usability of multiple-select fields. See the two images below for a comparison.
To do this, simply activate the Chosen plugin (general site configuration > plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)
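For context, Chosen itself is an ordinary jQuery plugin, and the enhancement described above boils down to a one-line call on the selected elements. The following is only a minimal sketch, assuming jQuery and the Chosen assets (chosen.jquery.js, chosen.css) are already loaded on the page; the selector mirrors the select[multiple] example above, and the option values are illustrative, not the plugin's defaults.

// Minimal sketch: enhance every multiple-select with Chosen once the DOM is ready.
// Assumes jQuery and the Chosen assets are already loaded on the page.
jQuery(function ($) {
    $("select[multiple]").chosen({
        width: "100%",                                    // illustrative option
        placeholder_text_multiple: "Select some options"  // illustrative option
    });
});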
-
Specific configuration for PHP5
4 February 2011
PHP5 is required; you can install it by following this dedicated tutorial.
It is recommended to disable safe_mode at first; however, if it is correctly configured and the necessary binaries are accessible, MediaSPIP should work correctly with safe_mode enabled.
Specific modules
Certain specific PHP modules need to be installed, either through your distribution's package manager or manually: php5-mysql for connectivity with the (...)
-
APPENDIX: Plugins used specifically for the farm
5 March 2010
The central/master site of the farm needs several additional plugins, compared with the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to handle registrations and requests to create a shared instance as soon as users sign up; the verifier plugin, which provides a field-verification API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
On other sites (5384)
-
Best approach to real time http streaming to HTML5 video client
12 October 2016, by deandob
I'm really stuck trying to understand the best way to stream real-time output of ffmpeg to an HTML5 client using node.js, as there are a number of variables at play and I don't have a lot of experience in this space, having spent many hours trying different combinations.
My use case is:
1) An IP video camera RTSP H.264 stream is picked up by FFMPEG and remuxed into an MP4 container using the following FFMPEG settings in node, output to STDOUT. This is only run on the initial client connection, so that partial-content requests don't try to spawn FFMPEG again.
liveFFMPEG = child_process.spawn("ffmpeg", [
    "-i", "rtsp://admin:12345@192.168.1.234:554", "-vcodec", "copy", "-f",
    "mp4", "-reset_timestamps", "1", "-movflags", "frag_keyframe+empty_moov",
    "-" // output to stdout
], {detached: false});

2) I use the node http server to capture the STDOUT and stream that back to the client upon a client request. When the client first connects I spawn the above FFMPEG command line then pipe the STDOUT stream to the HTTP response.
liveFFMPEG.stdout.pipe(resp);
I have also used the stream data event to write the FFMPEG data to the HTTP response, but it makes no difference:

liveFFMPEG.stdout.on("data", function(data) {
    resp.write(data);
});

I use the following HTTP headers (which are also used and working when streaming pre-recorded files):
var total = 999999999 // fake a large file
var partialstart = 0
var partialend = total - 1

if (range !== undefined) {
    var parts = range.replace(/bytes=/, "").split("-");
    var partialstart = parts[0];
    var partialend = parts[1];
}

var start = parseInt(partialstart, 10);
var end = partialend ? parseInt(partialend, 10) : total; // fake a large file if no range request
var chunksize = (end - start) + 1;

resp.writeHead(206, {
    'Transfer-Encoding': 'chunked'
    , 'Content-Type': 'video/mp4'
    , 'Content-Length': chunksize // large size to fake a file
    , 'Accept-Ranges': 'bytes ' + start + "-" + end + "/" + total
});

3) The client has to use HTML5 video tags.
I have no problems streaming playback (using fs.createReadStream with 206 HTTP partial content) to the HTML5 client of a video file previously recorded with the above FFMPEG command line (but saved to a file instead of STDOUT), so I know the FFMPEG stream is correct, and I can even see the video streaming live in VLC when connecting to the HTTP node server.

However, trying to stream live from FFMPEG via node HTTP seems to be a lot harder, as the client will display one frame and then stop. I suspect the problem is that I am not setting up the HTTP connection to be compatible with the HTML5 video client. I have tried a variety of things, like using HTTP 206 (partial content) and 200 responses and putting the data into a buffer then streaming, with no luck, so I need to go back to first principles to ensure I'm setting this up the right way.

Here is my understanding of how this should work, please correct me if I'm wrong:
1) FFMPEG should be set up to fragment the output and use an empty moov (the FFMPEG frag_keyframe and empty_moov movflags). This means the client does not rely on the moov atom, which normally sits at the end of the file and isn't relevant when streaming (there is no end of file), but it also means no seeking is possible, which is fine for my use case.

2) Even though I use MP4 fragments and an empty MOOV, I still have to use HTTP partial content, as the HTML5 player will wait until the entire stream is downloaded before playing, which with a live stream never ends, so it is unworkable.

3) I don't understand why piping the STDOUT stream to the HTTP response doesn't work when streaming live, yet if I save to a file I can stream that file easily to HTML5 clients using similar code. Maybe it's a timing issue, as it takes a second for the FFMPEG spawn to start, connect to the IP camera and send chunks to node, and the node data events are irregular as well. However, the bytestream should be exactly the same as saving to a file, and HTTP should be able to cater for delays.

4) When checking the network log from the HTTP client when streaming an MP4 file created by FFMPEG from the camera, I see there are 3 client requests: a general GET request for the video, to which the HTTP server returns about 40Kb, then a partial-content request with a byte range for the last 10K of the file, then a final request for the bits in the middle not yet loaded. Maybe the HTML5 client, once it receives the first response, is asking for the last part of the file to load the MP4 MOOV atom? If this is the case it won't work for streaming, as there is no MOOV atom and no end of the file.

5) When checking the network log when trying to stream live, I get an aborted initial request with only about 200 bytes received, then a re-request, again aborted after 200 bytes, and a third request which is only 2K long. I don't understand why the HTML5 client would abort the request, as the bytestream is exactly the same as the one I can successfully use when streaming from a recorded file. It also seems node isn't sending the rest of the FFMPEG stream to the client, yet I can see the FFMPEG data in the .on event routine, so it is getting to the node HTTP server.

6) Although I think piping the STDOUT stream to the HTTP response buffer should work, do I have to build an intermediate buffer and stream so that the HTTP partial-content client requests work properly, like they do when (successfully) reading a file? I think this is the main reason for my problems, however I'm not exactly sure how best to set that up in Node. And I don't know how to handle a client request for the data at the end of the file, as there is no end of file.

7) Am I on the wrong track in trying to handle 206 partial-content requests, and should this work with normal 200 HTTP responses? HTTP 200 responses work fine for VLC, so I suspect the HTML5 video client will only work with partial-content requests?

As I'm still learning this stuff it's difficult to work through the various layers of this problem (FFMPEG, node, streaming, HTTP, HTML5 video), so any pointers will be greatly appreciated. I have spent hours researching on this site and the net, and I have not come across anyone who has been able to do real-time streaming in node, but I can't be the first, and I think this should be able to work (somehow!).
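For orientation, here is a minimal sketch of the plain-200 variant the last point asks about: the live response carries no Content-Length and no range handling, so node falls back to chunked transfer and simply relays ffmpeg's fragmented-MP4 stdout. The RTSP URL, the port and the one-ffmpeg-process-per-client layout are placeholders rather than the poster's actual setup, and the sketch only illustrates the wiring; it does not by itself guarantee that every HTML5 client will play the stream.

// Minimal sketch: relay fragmented MP4 from ffmpeg to each client over a plain
// 200 response. RTSP URL and port are placeholders.
const http = require('http');
const { spawn } = require('child_process');

http.createServer(function (req, res) {
    // No Content-Length and no range handling: node then uses
    // Transfer-Encoding: chunked, which suits a never-ending live stream.
    res.writeHead(200, { 'Content-Type': 'video/mp4' });

    // One ffmpeg per client keeps the sketch simple.
    const ffmpeg = spawn('ffmpeg', [
        '-i', 'rtsp://camera.example:554/stream',   // placeholder source
        '-vcodec', 'copy',
        '-f', 'mp4',
        '-movflags', 'frag_keyframe+empty_moov',    // fragmented MP4, no trailing moov
        '-'                                         // write the muxed stream to stdout
    ]);

    ffmpeg.stdout.pipe(res);                        // stream bytes to the client as they arrive

    req.on('close', function () {
        ffmpeg.kill('SIGKILL');                     // stop ffmpeg when the client disconnects
    });
}).listen(8080);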
-
FFMPEG not enough data (x < y), trying to decode anyway
7 June 2016, by Forest J. Handford
I'm trying to make videos of Direct3D games using a C# app. For non-Direct3D games I stream images from Graphics.CopyFromScreen, which works. When I copy the screen from Direct3D and stream it to FFMPEG I get:
[bmp @ 00000276b0b9c280] not enough data (5070 < 129654), trying to decode anyway

An MP4 file is created, but it is always 0 bytes.
To get screenshots from Direct3D, I am using Justin Stenning’s Direct3DHook. This produces images MUCH bigger than when I get images from Graphics.CopyFromScreen (8 MB vs 136 KB). I’ve tried increasing the buffer (-bufsize) but the number on the left of the error is not impacted.
I’ve tried resizing the image to 1/6th the original. That reduces the number on the right, but does not eliminate it. Even when the number on the right is close to what I have for Graphics.CopyFromScreen I get an error. Here is a sample of the current code:
using System;
using System.Diagnostics;
using System.Threading;
using System.Drawing;
using Capture.Hook;
using Capture.Interface;
using Capture;
using System.IO;

namespace GameRecord
{
    public class Video
    {
        private const int VID_FRAME_FPS = 8;
        private const int SIZE_MODIFIER = 6;
        private const double FRAMES_PER_MS = VID_FRAME_FPS * 0.001;
        private const int SLEEP_INTERVAL = 2;
        private const int CONSTANT_RATE_FACTOR = 18; // Lower crf = Higher Quality https://trac.ffmpeg.org/wiki/Encode/H.264

        private Image image;
        private Capture captureScreen;
        private int processId = 0;
        private Process process;
        private CaptureProcess captureProcess;
        private Process launchingFFMPEG;
        private string arg;
        private int frame = 0;
        private Size? resize = null;

        /// <summary>
        /// Generates the Videos by gathering frames and processing via FFMPEG.
        /// </summary>
        public void RecordScreenTillGameEnd(string exe, OutputDirectory outputDirectory, CustomMessageBox alertBox, Thread workerThread)
        {
            AttachProcess(exe);
            RequestD3DScreenShot();
            while (image == null) ;

            Logger.log.Info("Launching FFMPEG ....");
            resize = new Size(image.Width / SIZE_MODIFIER, image.Height / SIZE_MODIFIER);

            // H.264 can let us do 8 FPS in high res . . . but must be licensed for commercial use.
            arg = "-f image2pipe -framerate " + VID_FRAME_FPS + " -i pipe:.bmp -pix_fmt yuv420p -crf " +
                  CONSTANT_RATE_FACTOR + " -preset ultrafast -s " + resize.Value.Width + "x" +
                  resize.Value.Height + " -vcodec libx264 -bufsize 30000k -y \"" +
                  outputDirectory.pathToVideo + "\"";

            launchingFFMPEG = new Process
            {
                StartInfo = new ProcessStartInfo
                {
                    FileName = "ffmpeg",
                    Arguments = arg,
                    UseShellExecute = false,
                    CreateNoWindow = true,
                    RedirectStandardInput = true,
                    RedirectStandardError = true
                }
            };
            launchingFFMPEG.Start();

            Stopwatch stopWatch = Stopwatch.StartNew(); //creates and start the instance of Stopwatch
            do
            {
                Thread.Sleep(SLEEP_INTERVAL);
            } while (workerThread.IsAlive);

            Logger.log.Info("Total frames: " + frame + " Expected frames: " + (ExpectedFrames(stopWatch.ElapsedMilliseconds) - 1));
            launchingFFMPEG.StandardInput.Close();

#if DEBUG
            string line;
            while ((line = launchingFFMPEG.StandardError.ReadLine()) != null)
            {
                Logger.log.Debug(line);
            }
#endif
            launchingFFMPEG.Close();
            alertBox.Show();
        }

        void RequestD3DScreenShot()
        {
            captureProcess.CaptureInterface.BeginGetScreenshot(new Rectangle(0, 0, 0, 0), new TimeSpan(0, 0, 2), Callback, resize, (ImageFormat)Enum.Parse(typeof(ImageFormat), "Bitmap"));
        }

        private void AttachProcess(string exe)
        {
            Thread.Sleep(300);
            Process[] processes = Process.GetProcessesByName(Path.GetFileNameWithoutExtension(exe));
            foreach (Process currProcess in processes)
            {
                // Simply attach to the first one found.
                // If the process doesn't have a mainwindowhandle yet, skip it (we need to be able to get the hwnd to set foreground etc)
                if (currProcess.MainWindowHandle == IntPtr.Zero)
                {
                    continue;
                }

                // Skip if the process is already hooked (and we want to hook multiple applications)
                if (HookManager.IsHooked(currProcess.Id))
                {
                    continue;
                }

                Direct3DVersion direct3DVersion = Direct3DVersion.AutoDetect;
                CaptureConfig cc = new CaptureConfig()
                {
                    Direct3DVersion = direct3DVersion,
                    ShowOverlay = false
                };
                processId = currProcess.Id;
                process = currProcess;

                var captureInterface = new CaptureInterface();
                captureInterface.RemoteMessage += new MessageReceivedEvent(CaptureInterface_RemoteMessage);
                captureProcess = new CaptureProcess(process, cc, captureInterface);
                break;
            }
            Thread.Sleep(10);

            if (captureProcess == null)
            {
                ShowUser.Exception("No executable found matching: '" + exe + "'");
            }
        }

        /// <summary>
        /// The callback for when the screenshot has been taken
        /// </summary>
        void Callback(IAsyncResult result)
        {
            using (Screenshot screenshot = captureProcess.CaptureInterface.EndGetScreenshot(result))
                if (screenshot != null && screenshot.Data != null && arg != null)
                {
                    if (image != null)
                    {
                        image.Dispose();
                    }
                    image = screenshot.ToBitmap();
                    // image.Save("D3DImageTest.bmp");
                    image.Save(launchingFFMPEG.StandardInput.BaseStream, System.Drawing.Imaging.ImageFormat.Bmp);
                    launchingFFMPEG.StandardInput.Flush();
                    frame++;
                }

            if (frame < 5)
            {
                Thread t = new Thread(new ThreadStart(RequestD3DScreenShot));
                t.Start();
            }
            else
            {
                Logger.log.Info("Done getting shots from D3D.");
            }
        }

        /// <summary>
        /// Display messages from the target process
        /// </summary>
        private void CaptureInterface_RemoteMessage(MessageReceivedEventArgs message)
        {
            Logger.log.Info(message);
        }
    }
}

When I search the internet for the error, all I get is the FFMPEG source code, which has not proven to be illuminating. I have been able to save the image directly to disk, which makes me feel like it is not an issue with disposing the data. I have also tried only grabbing one frame, but that produces the same error, which suggests to me it is not a threading issue.
Here is the full sample of stderr:
2016-06-02 18:29:38,046 === ffmpeg version N-79143-g8ff0f6a Copyright (c) 2000-2016 the FFmpeg developers
2016-06-02 18:29:38,047 === built with gcc 5.3.0 (GCC)
2016-06-02 18:29:38,048 === configuration: --enable-gpl
--enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmfx --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
2016-06-02 18:29:38,062 === libavutil 55. 19.100 / 55. 19.100
2016-06-02 18:29:38,063 === libavcodec 57. 30.100 / 57. 30.100
2016-06-02 18:29:38,064 === libavformat 57. 29.101 / 57. 29.101
2016-06-02 18:29:38,064 === libavdevice 57. 0.101 / 57. 0.101
2016-06-02 18:29:38,065 === libavfilter 6. 40.102 / 6. 40.102
2016-06-02 18:29:38,066 === libswscale 4. 0.100 / 4. 0.100
2016-06-02 18:29:38,067 === libswresample 2. 0.101 / 2. 0.101
2016-06-02 18:29:38,068 === libpostproc 54. 0.100 / 54. 0.100
2016-06-02 18:29:38,068 === [bmp @ 000002cd7e5cc280] not enough data (13070 < 8294454), trying to decode anyway
2016-06-02 18:29:38,069 === [bmp @ 000002cd7e5cc280] not enough data (13016 < 8294400)
2016-06-02 18:29:38,069 === Input #0, image2pipe, from 'pipe:.bmp':
2016-06-02 18:29:38,262 === Duration: N/A, bitrate: N/A
2016-06-02 18:29:38,262 === Stream #0:0: Video: bmp, bgra, 1920x1080, 8 tbr, 8 tbn, 8 tbc
2016-06-02 18:29:38,263 === [libx264 @ 000002cd7e5d59a0] VBV bufsize set but maxrate unspecified, ignored
2016-06-02 18:29:38,264 === [libx264 @ 000002cd7e5d59a0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 AVX2 LZCNT BMI2
2016-06-02 18:29:38,265 === [libx264 @ 000002cd7e5d59a0] profile Constrained Baseline, level 1.1
2016-06-02 18:29:38,266 === [libx264 @ 000002cd7e5d59a0] 264 - core 148 r2665 a01e339 - H.264/MPEG-4 AVC codec - Copyleft 2003-2016 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=8 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=18.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
2016-06-02 18:29:38,463 === Output #0, mp4, to 'C:\Users\fores\AppData\Roaming\Affectiva\n_Artifacts_20160602_182857\GameplayVidOut.mp4':
2016-06-02 18:29:38,464 === Metadata:
2016-06-02 18:29:38,465 === encoder : Lavf57.29.101
2016-06-02 18:29:38,469 === Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 320x180, q=-1--1, 8 fps, 16384 tbn, 8 tbc
2016-06-02 18:29:38,470 === Metadata:
2016-06-02 18:29:38,472 === encoder : Lavc57.30.100 libx264
2016-06-02 18:29:38,474 === Side data:
2016-06-02 18:29:38,475 === cpb: bitrate max/min/avg: 0/0/0 buffer size: 30000000 vbv_delay: -1
2016-06-02 18:29:38,476 === Stream mapping:
2016-06-02 18:29:38,477 === Stream #0:0 -> #0:0 (bmp (native) -> h264 (libx264))
2016-06-02 18:29:38,480 === [bmp @ 000002cd7e5cc9a0] not enough data (13070 < 8294454), trying to decode anyway
2016-06-02 18:29:38,662 === [bmp @ 000002cd7e5cc9a0] not enough data (13016 < 8294400)
2016-06-02 18:29:38,662 === Error while decoding stream #0:0: Invalid data found when processing input
2016-06-02 18:29:38,663 === frame= 0 fps=0.0 q=0.0 Lsize= 0kB time=00:00:00.00 bitrate=N/A speed= 0x
2016-06-02 18:29:38,663 === video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
2016-06-02 18:29:38,664 === Conversion failed!

In memory, the current image is 320 pixels wide and 180 pixels tall. The pixel format is Format32bppRgb. The horizontal and vertical resolutions seem odd; they are both 96.01199. When saved to disk, here is the ffprobe output for the file:
ffprobe version N-79143-g8ff0f6a Copyright (c) 2007-2016 the FFmpeg developers
built with gcc 5.3.0 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmfx --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
libavutil 55. 19.100 / 55. 19.100
libavcodec 57. 30.100 / 57. 30.100
libavformat 57. 29.101 / 57. 29.101
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 40.102 / 6. 40.102
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, png_pipe, from 'C:\Users\fores\git\game-playtest-tool\GamePlayTest\bin\x64\Debug\D3DFromCapture.bmp':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: png, rgba(pc), 1920x1080 [SAR 3779:3779 DAR 16:9], 25 tbr, 25 tbn, 25 tbc

Here is a PNG version of an example screenshot from the current code (playing Portal 2):
Any ideas would be greatly appreciated. My current workaround is to save the files to the HDD and compile the video after gameplay, but it's a far less performant option. Thank you!
-
How would I create a bash script to convert .wav files from folder [A] to mp3 format, then move them to a new folder [mp3folder]
14 September 2016, by Hayden Beasley
I want this script to run in a loop every hour. The main goal is to convert the .wav files that I export to my VM share folder when using Ableton.
Here is a messy idea of what I want, but I need lots of help with it:
#!/bin/bash
# Convert every .wav dropped in the VM share folder to MP3, then move the
# resulting .mp3 files to the music folder.
for file in /mnt/hgfs/VMshare/transfer/*.wav
do
    if [ -f "$file" ]
    then
        ffmpeg -i "$file" -acodec libmp3lame -ab 128k "${file%.wav}.mp3"
        mv "${file%.wav}.mp3" /root/Desktop/MP3Music/
    else
        echo "NO WAV TO CONVERT"
    fi
done