
Media (1)
- Bee video in portrait orientation
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (45)
- (De)Activating features (plugins)
18 February 2011, by
To manage adding and removing extra features (plugins), MediaSPIP relies, as of version 0.2, on SVP.
SVP makes it easy to enable plugins from the MediaSPIP configuration area.
To get there, go to the configuration area and open the "Gestion des plugins" page.
MediaSPIP ships by default with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work seamlessly with each (…)
- Enabling visitor registration
12 April 2011, by
It is also possible to enable visitor registration, which lets anyone open an account on the channel in question themselves, in the context of open projects for example.
To do so, go to the site configuration area and choose the "Gestion des utilisateurs" (user management) submenu. The first form shown corresponds to this feature.
By default, MediaSPIP created, during its initialisation, a menu entry in the top menu of the page leading (…)
- Accepted formats
28 January 2010, by
The following commands give information about the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Video formats accepted as input
This list is not exhaustive; it highlights the main formats in use:
- h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
- m4v: raw MPEG-4 video format
- flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
- Theora
- wmv:
Possible output video formats
To begin with, we (…)
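As an aside, the codec and format inventory listed by those two commands can also be queried programmatically from Node.js with fluent-ffmpeg (the wrapper used in one of the questions below). This is only a minimal sketch and assumes ffmpeg is installed and on the PATH:

// Query the local ffmpeg build for supported formats and codecs
// (equivalent to running `ffmpeg -formats` and `ffmpeg -codecs`).
const ffmpeg = require('fluent-ffmpeg');

ffmpeg.getAvailableFormats((err, formats) => {
  if (err) return console.error('Could not list formats:', err.message);
  // e.g. check that MPEG-TS muxing is available before producing HLS segments
  console.log('mpegts supported:', 'mpegts' in formats);
});

ffmpeg.getAvailableCodecs((err, codecs) => {
  if (err) return console.error('Could not list codecs:', err.message);
  // e.g. confirm H.264 decode/encode support
  console.log('h264:', codecs.h264);
});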
On other sites (5385)
- OpenCV [[mjpeg @ 0000000000428480] overread 8] while reading a frame from a camera
8 October 2015, by wojcienty
I have a very annoying OpenCV error that I can't understand or handle.
I am writing an application that reads an MJPEG stream from an IP camera and processes it, but when I try to load an image from the stream I sometimes get the error
[mjpeg @ 0000000000428480] overread 8
and I don't know why.
Even if I try to skip this issue and load the next frame from the stream, the application gets stuck on frameStatus = cameraHandler->read(mat);
This is the code for establishing the connection:
void ImageProcessor::connectWithCamera(VideoCapture * cameraHandler) {
    if (cameraHandler != nullptr) {
        Logger::log("Closing existing camera stream.");
        cameraHandler->release();
        delete cameraHandler;
    }
    Logger::log("Camera configuration and connection establishing.");
    cameraHandler = new VideoCapture();
    cameraHandler->set(CV_CAP_PROP_FRAME_WIDTH, config.RESOLUTION_WIDTH);
    cameraHandler->set(CV_CAP_PROP_FRAME_HEIGHT, config.RESOLUTION_HEIGHT);
    cameraHandler->set(CV_CAP_PROP_FPS, config.CAMERA_FPS);
    cameraHandler->set(CV_CAP_PROP_FOURCC, CV_FOURCC('M', 'J', 'P', 'G'));
    while (!cameraHandler->open(config.LINK)) {
        Logger::log("Cannot connect to camera! Trying again.");
    }
}
And this is the code for capturing images:
void ImageProcessor::start() {
    VideoCapture * cameraHandler = new VideoCapture();
    this->connectWithCamera(cameraHandler);
    this->connectWithServer(this->serverConnection);
    Logger::log("Id sending.");
    serverConnection->send(config.TOKEN + "\n");
    Logger::log("Computations starting.");
    Mat mat;
    Result * result = nullptr;
    int delta = 1000 / cameraHandler->get(CV_CAP_PROP_FPS);
    char frameErrorCounter = 0;
    bool frameStatus;
    while (true) {
        frameStatus = false;
        cv::waitKey(delta);
        try {
            frameStatus = cameraHandler->read(mat);
        } catch (std::exception& e) {
            std::string message = e.what();
            Logger::log("Critical camera error! : " + message);
        }
        if (!frameStatus) {
            Logger::log("Cannot read a frame from source. ");
            ++frameErrorCounter;
            if (!cameraHandler->isOpened() || frameErrorCounter >= this->GET_FRAME_ERROR_COUNTER) {
                Logger::log("Probably camera is disconnected. Trying to establish connection again.");
                frameErrorCounter = 0;
                this->connectWithCamera(cameraHandler);
                Logger::log("Computations starting.");
            }
            continue;
        }
        result = processImage(mat);
        std::string stringResult;
        if (result == nullptr) {
            stringResult = this->NO_RESULT;
            delete result;
            result = nullptr;
        } else {
            stringResult = result->toJson();
        }
        if (!serverConnection->send(stringResult)) {
            Logger::log("Server connection lost, trying to establish it again.");
            serverConnection->close();
            while (!serverConnection->isOpen()) {
                this->connectWithServer(serverConnection);
            }
        }
        mat.release();
    }
}
Thanks in advance!
- Encoding for HTTP Live Streaming with Xuggle
26 May 2012, by Luuk D. Jansen
I have created a server system based on Xuggle to encode an incoming file to H.264 and segment it. However, when playing the video back in QuickTime it almost works (with a small hiccup in the audio sometimes), but when switching from one quality stream to another the image gets lost.
So I ran the 'mediastreamvalidator' tool and got the following errors:
ERROR: (-1) Unknown video codec: 1836069494 (program 0, track 0)
ERROR: (-1) failed to parse segment as either an MPEG-2 TS or an ES
So I used FFmpeg to get some info on the codecs.
The result of my Xuggler encoding:
Input #0, mpegts, from 'segment_0.ts':
Duration: 00:00:09.40, start: 0.000000, bitrate: 3618 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0.0[0x100]: Video: mpeg2video (Main), yuv420p, 960x540 [PAR 1:1 DAR 16:9], 104857 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0.1[0x101]: Audio: mp2, 48000 Hz, stereo, s16, 128 kb/s
The result of a file created by Compressor:
Seems stream 0 codec frame rate differs from container frame rate: 180000.00 (180000/1) -> 25.00 (25/1)
Input #0, mpegts, from 'fileSequence1.ts':
Duration: 00:00:09.97, start: 19.984578, bitrate: 5308 kb/s
Program 1
Stream #0.0[0x101]: Video: h264 (Main), yuv420p, 960x540, 25 tbr, 90k tbn, 180k tbc
Stream #0.1[0x102]: Audio: aac, 22050 Hz, stereo, s16, 32 kb/s
The main difference seems to be that the Xuggler-encoded file reports Video: mpeg2video instead of h264 (incidentally, the codec id 1836069494 in the validator error, read as four ASCII bytes, is 'mp2v', which points the same way). However, while encoding I did specifically set the coder to ICodec.ID.CODEC_ID_H264.
How can I force it to use h264? The same goes for audio: I specified AAC and got MP2.
I subsequently used FFmpeg directly, and that results in:
Input #0, mpegts, from 'encoded.ts':
Duration: 00:00:24.16, start: 1.400000, bitrate: 360 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0.0[0x100]: Video: h264 (Main), yuv420p, 1920x1080 [PAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0.1[0x101](eng): Audio: aac, 48000 Hz, stereo, s16, 57 kb/s
That looks better. I could use FFmpeg directly, but by using Xuggler I can segment the file while keeping track of the progress of the process more easily.
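For what it's worth, the "drive ffmpeg directly but keep progress reporting" route the question mentions is roughly what the fluent-ffmpeg Node wrapper (used in the next question) provides. A hedged sketch producing H.264/AAC HLS segments follows; the input and output paths and the segment length are assumptions, not taken from the question:

// Hedged sketch: segment an input file into HLS with H.264 video and AAC audio
// using fluent-ffmpeg, while reporting progress. Paths and options are assumptions.
const ffmpeg = require('fluent-ffmpeg');

ffmpeg('input.mov')
  .videoCodec('libx264')          // force H.264 instead of the muxer default
  .audioCodec('aac')              // force AAC instead of MP2
  .outputOptions([
    '-hls_time 10',               // roughly 10 s per segment
    '-hls_list_size 0'            // keep every segment in the playlist
  ])
  .format('hls')
  .output('stream/prog_index.m3u8')
  .on('progress', p => console.log('Encoding: ' + p.percent + '%'))
  .on('error', err => console.error('Encoding failed: ' + err.message))
  .on('end', () => console.log('Segmenting finished'))
  .run();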
- Fluent-FFMPEG: Failed to configure output pad on Parsed_concat_0 [duplicate]
16 September 2020, by Suo
Merging videos with Fluent-FFMPEG


Environment

- NodeJS v10
- Fluent-FFMPEG

Description

I'm currently trying to make a web service that merges videos with Fluent-FFMPEG.

It's quite simple:

INPUT

You have N videos with the same resolution, format, codec (both audio and video), framerate and bitrate (also for audio and video).


prepareAndExecuteMergeProcess(entity, files) {
  let mergeProcess = ffmpeg()
  for (let file in files) {
    let video = files[file];
    mergeProcess.addInput(this.getInputTempDir(video)); // InputTempDir is the temporary folder where the video is stored
  }
  mergeProcess.mergeToFile(this.getOutputTempDir(entity), this.getProcessTempDir(entity));

  mergeProcess.on('error', (err, stdout, stderr) => {
    console.error('An error occurred: ' + err.message);
    console.log("ffmpeg stdout:\n" + stdout);
    console.log("ffmpeg stderr:\n" + stderr);
  })
  mergeProcess.on('progress', (progress) => {
    console.log("Merging... : " + progress.percent + "%");
  });
  mergeProcess.on('end', () => {
    console.info('Merging finished !');
  })
}

VIDEO FORMAT

All the videos that are used by this service were converted by this process beforehand:

let process = ffmpeg({source: this.getInputTempDir(entity)});
process.output(this.getOutputTempDir(entity))
  .size(entity.profile.resolution)
  .format(entity.profile.format)
  .audioBitrate(128)
  .aspect('16:9')
  .autopad()
  .run()

OUTPUT

This is what I get:

[Parsed_concat_0 @ 000001c190524300] Input link in1:v0 parameters (size 720x480, SAR 0:1) do not match the corresponding output link in0:v0 parameters (720x480, SAR 404:405)
[Parsed_concat_0 @ 000001c190524300] Failed to configure output pad on Parsed_concat_0
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #1:0
[aac @ 000001c19043b740] Qavg: 9636.813
[aac @ 000001c19043b740] 2 frames left in the queue on closing



Am I missing something? I'm quite lost on this one and the documentation isn't really clear about those cases...


Thank you in advance
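
For context on the error above: the concat filter refuses to join streams whose sample aspect ratios differ (SAR 0:1 vs 404:405 in this log). One hedged way to avoid that is to pin the SAR during the pre-conversion step shown earlier, for instance with a setsar video filter. The scale and filter values below are assumptions, not taken from the question:

// Hedged sketch: normalise every clip before merging so the concat filter
// sees identical parameters. The size and setsar values are assumptions.
const ffmpeg = require('fluent-ffmpeg');

function normalise(inputPath, outputPath, done) {
  ffmpeg(inputPath)
    .size('720x480')                  // same target resolution for every clip
    .videoFilters('setsar=1')         // force a uniform sample aspect ratio
    .audioBitrate(128)
    .output(outputPath)
    .on('error', err => done(err))
    .on('end', () => done(null, outputPath))
    .run();
}

Whether setsar=1 or matching the 404:405 ratio of the other inputs is the right choice depends on how the source clips were produced; the point is simply that every input fed to the merge must end up with the same SAR.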