
Media (1)
-
MediaSPIP Simple: a future default graphic theme?
26 September 2013
Updated: October 2013
Language: French
Type: Video
Other articles (46)
-
Sites built with MediaSPIP
2 May 2011. This page presents some of the sites running MediaSPIP.
You can of course add yours using the form at the bottom of the page. -
Participating in its translation
10 April 2011. You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
At the moment MediaSPIP is only available in French and (...) -
HTML5 audio and video support
13 April 2011. MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (9250)
-
Unable to choose an output [closed]
17 February 2024, by Xdangin
I try to change the timing of a subtitle file, and every time this pops up in my face:




Unable to choose an output format for 'output-subfamily'; use a standard extension for the filename or specify the format manually




Sorry guys, I don't get it; I'm new at this.


Here's the full log in case I wrote something wrong:


/sdcard $ ffmpeg -itsoffset 2 -i Family.Guy.S04E23.srt -c:v copy output-sub family guy.srt

ffmpeg version 6.1.1 Copyright (c) 2000-2023 the FFmpeg developers

 built with Android (10552028, +pgo, +bolt, +lto, -mlgo, based on r487747d) clang version 17.0.2 (https://android.googlesource.com/toolchain/llvm-project d9f89f4d16663d5012e5c09495f3b30ece3d2362)

 configuration: --arch=aarch64 --as=aarch64-linux-android-clang --cc=aarch64-linux-android-clang --cxx=aarch64-linux-android-clang++ --nm=llvm-nm --pkg-config=/home/builder/.termux-build/_cache/android-r26b-api-24-v1/bin/pkg-config --strip=llvm-strip --cross-prefix=aarch64-linux-android- --disable-indevs --disable-outdevs --enable-indev=lavfi --disable-static --disable-symver --enable-cross-compile --enable-gnutls --enable-gpl --enable-version3 --enable-jni --enable-lcms2 --enable-libaom --enable-libass --enable-libbluray --enable-libdav1d --enable-libfreetype --enable-libgme --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenmpt --enable-libopus --enable-librav1e --enable-libsoxr --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-mediacodec --enable-opencl --enable-shared --prefix=/data/data/com.termux/files/usr --target-os=android --extra-libs=-landroid-glob --disable-vulkan --enable-neon --disable-libfdk-aac

 libavutil 58. 29.100 / 58. 29.100

 libavcodec 60. 31.102 / 60. 31.102

 libavformat 60. 16.100 / 60. 16.100

 libavdevice 60. 3.100 / 60. 3.100

 libavfilter 9. 12.100 / 9. 12.100

 libswscale 7. 5.100 / 7. 5.100

 libswresample 4. 12.100 / 4. 12.100

 libpostproc 57. 3.100 / 57. 3.100

Input #0, srt, from 'Family.Guy.S04E23.srt':

 Duration: N/A, bitrate: N/A

 Stream #0:0: Subtitle: subrip

[AVFormatContext @ 0x75b7eb2500] Unable to choose an output format for 'output-sub'; use a standard extension for the filename or specify the format manually.

[out#0 @ 0x75b7e3e840] Error initializing the muxer for output-sub: Invalid argument

Error opening output file output-sub.

Error opening output files: Invalid argument

/sdcard $ ffmpeg -itsoffset 2 -i Family.Guy.S04E23.srt -c copy output-sub family guy.srt



I searched YouTube but didn't find anything
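Part of the problem is visible in the command itself: the output name contains unquoted spaces, so the shell splits it into three separate arguments and ffmpeg sees an output called "output-sub" with no extension, for which it cannot guess a format. A sketch of a corrected invocation, assuming the goal is simply to shift the subtitles by 2 seconds:

```shell
# Quote the output name so it stays one argument; the .srt extension then
# lets ffmpeg pick the SubRip muxer automatically. -c copy keeps the cues
# unchanged while -itsoffset shifts the input timestamps.
ffmpeg -itsoffset 2 -i Family.Guy.S04E23.srt -c copy "output-sub family guy.srt"
```

If the offset is not applied on a given build, passing the output-side option `-output_ts_offset 2` before the output name is an alternative worth trying.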


-
Affectiva drops every second frame
19 June 2019, by machinery
I am running Affectiva SDK 4.0 on a GoPro video recording, using a C++ program on Ubuntu 16.04. The GoPro video was recorded at 60 fps. The problem is that Affectiva only provides results for around half of the frames (i.e. 30 fps). If I look at the timestamps provided by Affectiva, the last timestamp matches the video duration, which means Affectiva somehow skips roughly every second frame.
Before running Affectiva I ran ffmpeg with the following command to make sure that the video has a constant frame rate of 60 fps:
ffmpeg -i in.MP4 -vf -y -vcodec libx264 -preset medium -r 60 -map_metadata 0:g -strict -2 out.MP4 null 2>&1
When I inspect the presentation timestamp using
ffprobe -show_entries frame=pict_type,pkt_pts_time -of csv -select_streams v in.MP4
I'm getting the following values for the raw video:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/media/GoPro_concat/GoPro_concat.MP4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.20.100
Duration: 01:14:46.75, start: 0.000000, bitrate: 15123 kb/s
Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuvj420p(pc, bt709), 1280x720 [SAR 1:1 DAR 16:9], 14983 kb/s, 59.94 fps, 59.94 tbr, 60k tbn, 119.88 tbc (default)
Metadata:
handler_name : GoPro AVC
timecode : 13:17:26:44
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 127 kb/s (default)
Metadata:
handler_name : GoPro AAC
Stream #0:2(eng): Data: none (tmcd / 0x64636D74)
Metadata:
handler_name : GoPro AVC
timecode : 13:17:26:44
Unsupported codec with id 0 for input stream 2
frame,0.000000,I
frame,0.016683,P
frame,0.033367,P
frame,0.050050,P
frame,0.066733,P
frame,0.083417,P
frame,0.100100,P
frame,0.116783,P
frame,0.133467,I
frame,0.150150,P
frame,0.166833,P
frame,0.183517,P
frame,0.200200,P
frame,0.216883,P
frame,0.233567,P
frame,0.250250,P
frame,0.266933,I
frame,0.283617,P
frame,0.300300,P
frame,0.316983,P
frame,0.333667,P
frame,0.350350,P
frame,0.367033,P
frame,0.383717,P
frame,0.400400,I
frame,0.417083,P
frame,0.433767,P
frame,0.450450,P
frame,0.467133,P
frame,0.483817,P
frame,0.500500,P
frame,0.517183,P
frame,0.533867,I
frame,0.550550,P
frame,0.567233,P
frame,0.583917,P
frame,0.600600,P
frame,0.617283,P
frame,0.633967,P
frame,0.650650,P
frame,0.667333,I
frame,0.684017,P
frame,0.700700,P
frame,0.717383,P
frame,0.734067,P
frame,0.750750,P
frame,0.767433,P
frame,0.784117,P
frame,0.800800,I
frame,0.817483,P
frame,0.834167,P
frame,0.850850,P
frame,0.867533,P
frame,0.884217,P
frame,0.900900,P
frame,0.917583,P
frame,0.934267,I
frame,0.950950,P
frame,0.967633,P
frame,0.984317,P
frame,1.001000,P
frame,1.017683,P
frame,1.034367,P
frame,1.051050,P
frame,1.067733,I
...I have uploaded the full output on OneDrive.
If I run Affectiva on the raw video (not processed by ffmpeg) I face the same problem of dropped frames. I was using Affectiva with
affdex::VideoDetector detector(60);
Is there a problem with the ffmpeg command or with Affectiva?
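One way to check whether frames are actually missing from the file (as opposed to being skipped by the detector) is to count decoded frames directly; a quick sketch, assuming out.MP4 is the re-encoded file:

```shell
# Decode the whole video stream and report how many frames came out;
# for roughly 75 minutes at 59.94 fps this should be on the order of 269,000.
# (-count_frames is slow because it really decodes every frame.)
ffprobe -v error -select_streams v:0 -count_frames \
    -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 out.MP4
```

If this count matches the expected duration × frame rate, the frames exist in the file and the drop happens inside Affectiva's processing.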
Edit: I think I have found out where the problem could be. It seems that Affectiva is not processing the whole video but just stops after a certain number of processed frames, without any error message. Below I have posted the C++ code I'm using. In the
onProcessingFinished()
method I print something to the console when processing finishes. But this message is never printed, so Affectiva never reaches the end.
Is there something wrong with my code, or should I encode the videos in another format than MP4?
#include "VideoDetector.h"
#include "FrameDetector.h"
#include <iostream>
#include <fstream>
#include <mutex>
#include <condition_variable>
#include <map>

std::mutex m;
std::condition_variable conditional_variable;
bool processed = false;

class Listener : public affdex::ImageListener {
public:
    Listener(std::ofstream * fout) {
        this->fout = fout;
    }
    virtual void onImageCapture(affdex::Frame image) {
        //std::cout << "called";
    }
    virtual void onImageResults(std::map<affdex::FaceId, affdex::Face> faces, affdex::Frame image) {
        //std::cout << faces.size() << " faces detected:" << std::endl;
        for (auto& kv : faces) {
            (*this->fout) << image.getTimestamp() << ",";
            (*this->fout) << kv.first << ",";
            (*this->fout) << kv.second.emotions.joy << ",";
            (*this->fout) << kv.second.emotions.fear << ",";
            (*this->fout) << kv.second.emotions.disgust << ",";
            (*this->fout) << kv.second.emotions.sadness << ",";
            (*this->fout) << kv.second.emotions.anger << ",";
            (*this->fout) << kv.second.emotions.surprise << ",";
            (*this->fout) << kv.second.emotions.contempt << ",";
            (*this->fout) << kv.second.emotions.valence << ",";
            (*this->fout) << kv.second.emotions.engagement << ",";
            (*this->fout) << kv.second.measurements.orientation.pitch << ",";
            (*this->fout) << kv.second.measurements.orientation.yaw << ",";
            (*this->fout) << kv.second.measurements.orientation.roll << ",";
            (*this->fout) << kv.second.faceQuality.brightness << std::endl;
            //std::cout << kv.second.emotions.fear << std::endl;
            //std::cout << kv.second.emotions.surprise << std::endl;
            //std::cout << (int) kv.second.emojis.dominantEmoji;
        }
    }
private:
    std::ofstream * fout;
};

class ProcessListener : public affdex::ProcessStatusListener {
public:
    virtual void onProcessingException(affdex::AffdexException ex) {
        std::cerr << "[Error] " << ex.getExceptionMessage();
    }
    virtual void onProcessingFinished() {
        {
            std::lock_guard<std::mutex> lk(m);
            processed = true;
            std::cout << "[Affectiva] Video processing finished." << std::endl;
        }
        conditional_variable.notify_one();
    }
};

int main(int argc, char ** argsv)
{
    affdex::VideoDetector detector(60, 1, affdex::FaceDetectorMode::SMALL_FACES);
    //affdex::VideoDetector detector(60, 1, affdex::FaceDetectorMode::LARGE_FACES);
    std::string classifierPath = "/home/wrafael/affdex-sdk/data";
    detector.setClassifierPath(classifierPath);
    detector.setDetectAllEmotions(true);

    // Output
    std::ofstream fout(argsv[2]);
    fout << "timestamp" << ",";
    fout << "faceId" << ",";
    fout << "joy" << ",";
    fout << "fear" << ",";
    fout << "disgust" << ",";
    fout << "sadness" << ",";
    fout << "anger" << ",";
    fout << "surprise" << ",";
    fout << "contempt" << ",";
    fout << "valence" << ",";
    fout << "engagement" << ",";
    fout << "pitch" << ",";
    fout << "yaw" << ",";
    fout << "roll" << ",";
    fout << "brightness" << std::endl;

    Listener l(&fout);
    ProcessListener pl;
    detector.setImageListener(&l);
    detector.setProcessStatusListener(&pl);
    detector.start();
    detector.process(argsv[1]);

    // wait for the worker
    {
        std::unique_lock<std::mutex> lk(m);
        conditional_variable.wait(lk, []{ return processed; });
    }
    fout.flush();
    fout.close();
}
Edit 2: I have now dug further into the problem and looked at just one GoPro file with a duration of 19 min 53 s (GoPro splits its recordings). When I run Affectiva with
affdex::VideoDetector detector(60, 1, affdex::FaceDetectorMode::SMALL_FACES);
on that raw video, the following file is produced. Affectiva stops after 906 s without any error message and without printing "[Affectiva] Video processing finished". When I now transform the video using
ffmpeg -i raw.MP4 -y -vcodec libx264 -preset medium -r 60 -map_metadata 0:g -strict -2 out.MP4
and then run Affectiva with
affdex::VideoDetector detector(60, 1, affdex::FaceDetectorMode::SMALL_FACES);
Affectiva runs until the end and prints
"[Affectiva] Video processing finished", but the frame rate is only about 23 fps. Here is the file. When I now run Affectiva with
affdex::VideoDetector detector(62, 1, affdex::FaceDetectorMode::SMALL_FACES);
on this transformed file, Affectiva stops after 509 s and "[Affectiva] Video processing finished" is not printed. Here is the file. -
Avisynth total frames does not equal VirtualDub total frames
7 May 2017, by Corpuscular
It appears that Dissolve and/or Fade change the total number of frames in .avs scripts. When I add up the total number of frames in the avs script and then load the avs script in VirtualDub, the total number of frames is different. My real-world example below shows a difference of 822 frames vs 1368 frames for the same script. I have run some basic tests which appear to support this hypothesis. Of course I may be doing something stupid. Any guidance would be greatly appreciated. Please let me know if I can clarify anything. Ffmpeg also borks on the same script, which leads me to think this is an AviSynth issue. Or my lack of avs coding skills.
System specs:
Win7,
FFmpeg version: 20170223-dcd3418 win32 shared,
AVISynth version: 2.6
Test1.avs = 200 frames long = Expected behaviour
LoadPlugin("C:\Program Files (x86)\AviSynth 2.5\plugins\VSFilter.dll")
v1=ImageReader("1.png", fps=24, start=1, end=100)
v2=ImageReader("2.png", fps=24, start=1, end=100)
video = v1 + v2
return video
Test2.avs with Dissolve = 195 frames long = Unexpected behaviour
LoadPlugin("C:\Program Files (x86)\AviSynth 2.5\plugins\VSFilter.dll")
v1=ImageReader("1.png", fps=24, start=1, end=100)
v2=ImageReader("2.png", fps=24, start=1, end=100)
return Dissolve(v1, v2, 5)
Test3.avs with fadeOut(fadeIn) = 202 frames long = Unexpected behaviour
LoadPlugin("C:\Program Files (x86)\AviSynth 2.5\plugins\VSFilter.dll")
v1=ImageReader("1.png", fps=24, start=1, end=100)
v2=ImageReader("2.png", fps=24, start=1, end=100)
fadeOut(fadeIn(v1 + v2, 60), 60)
Test4.avs with dissolve and fade = 197 frames long = Unexpected behaviour
LoadPlugin("C:\Program Files (x86)\AviSynth 2.5\plugins\VSFilter.dll")
v1=ImageReader("1.png", fps=24, start=1, end=100)
v2=ImageReader("2.png", fps=24, start=1, end=100)
v3 = Dissolve(v1, v2, 5)
fadeOut(fadeIn(v3, 60), 60)
Test5.avs explicitly specifying frame rates on dissolve and fade = 197 frames = Unexpected behaviour
LoadPlugin("C:\Program Files (x86)\AviSynth 2.5\plugins\VSFilter.dll")
v1=ImageReader("1.png", fps=24, start=1, end=100)
v2=ImageReader("2.png", fps=24, start=1, end=100)
v3 = Dissolve(v1, v2, 5, 24)
fadeOut(fadeIn(v3, 60, $000000, 24), 60, $000000, 24)
realExample = 822 frames long = Expected behaviour (this is what I want)
LoadPlugin("C:\Program Files (x86)\AviSynth 2.5\plugins\VSFilter.dll")
v1=ImageReader("1.png", fps=24).trim(1,106)
v3=ImageReader("3.png", fps=24).trim(1,471)
v9=ImageReader("9.png", fps=24).trim(1,58)
v10=ImageReader("10.png", fps=24).trim(1,35)
v11=ImageReader("11.png", fps=24).trim(1,152)
video = v1 + v3 + v9 + v10 + v11
return video
realExample = 1368 frames long
LoadPlugin("C:\Program Files (x86)\AviSynth 2.5\plugins\VSFilter.dll")
v1=ImageReader("1.png", fps=24).trim(1,106)
v3=ImageReader("3.png", fps=24).trim(1,471)
v9=ImageReader("9.png", fps=24).trim(1,58)
v10=ImageReader("10.png", fps=24).trim(1,35)
v11=ImageReader("11.png", fps=24).trim(1,152)
d1 = Dissolve(v1, v3, 5)
d3 = Dissolve(v3, v9, 5)
d9 = Dissolve(v9, v10, 5)
d10 = Dissolve(v10, v11, 5)
fadeOut(fadeIn(d1 + d3 + d9 + d10,60),60)
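The counts above are consistent with AviSynth's documented behaviour, under two assumptions worth checking against the filter reference: Dissolve(a, b, n) overlaps its inputs, producing len(a) + len(b) - n frames, and FadeIn/FadeOut each append one extra black frame. A quick sketch of the arithmetic under those assumptions:

```python
def dissolve_len(a, b, n):
    # Dissolve overlaps the tail of a with the head of b over n frames
    return a + b - n

# Test2.avs: two 100-frame clips, 5-frame dissolve
assert dissolve_len(100, 100, 5) == 195

# Test3.avs: 200 frames plus one extra black frame from each fade
assert 100 + 100 + 2 == 202

# Test4.avs / Test5.avs: dissolve first, then both fades
assert dissolve_len(100, 100, 5) + 2 == 197

# realExample with dissolves: summing the four Dissolve results counts the
# clips shared between neighbouring dissolves (v3, v9, v10) twice, which is
# where the 822-vs-1368 discrepancy comes from.
d1 = dissolve_len(106, 471, 5)
d3 = dissolve_len(471, 58, 5)
d9 = dissolve_len(58, 35, 5)
d10 = dissolve_len(35, 152, 5)
assert d1 + d3 + d9 + d10 + 2 == 1368
```

So the 1368-frame result matches what the script actually builds; the 822-frame expectation only holds for the plain concatenation without dissolves.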