
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011
Updated: October 2011
Language: English
Type: Text
Other articles (73)
-
User profiles
12 April 2011
Each user has a profile page for editing their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialised, visible only if the visitor is logged in on the site.
Users can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...) -
Configuring language support
15 November 2010
Accessing the configuration and adding supported languages
To enable support for new languages, go to the "Administrer" (administer) section of the site.
From there, the navigation menu gives access to a "Gestion des langues" (language management) section where support for new languages can be activated.
Each newly added language can still be deactivated as long as no object has been created in that language; once one has, the language is greyed out in the configuration and (...) -
XMP PHP
13 May 2011
According to Wikipedia, XMP stands for:
Extensible Metadata Platform, or XMP, an XML-based metadata format used in PDF, photography, and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it manages a set of dynamic tags for use in the Semantic Web.
XMP makes it possible to record information about a file as an XML document: title, author, history (...)
On other sites (9210)
-
Catch if the Java process crashed
10 May 2018, by Victor Sheyanov
I run a Java process to convert video using ffmpeg.exe:
Runtime rt = Runtime.getRuntime();
String cmd = FFMPEGFULLPATH + " -y -i " + '"' + mpeg4File + '"'
        + " -vcodec libx264 -vsync 2 " + '"' + H264file + '"';
Process pr = rt.exec(cmd);
ThreadedTranscoderIO errorHandler =
        new ThreadedTranscoderIO(pr.getErrorStream(), "Error Stream");
errorHandler.start();
ThreadedTranscoderIO inputHandler =
        new ThreadedTranscoderIO(pr.getInputStream(), "Output Stream");
inputHandler.start();
try {
    pr.waitFor();
} catch (InterruptedException e) {
    LiveApplication.logger.info("Some shit happens during convertation 2 ");
    throw new IOException("UseTranscoderBlocking - Run_FFMPEG - process interrupted " + e);
}

But once the process has started, sometimes (especially with big files, but not always) I get a Windows crash dialog (screenshot not reproduced here).
This happens only on Windows Server 2008 and never happened on Windows 7.
I have two questions:
- Why does this process fail?
- Can I catch this failure in Java, close the dialog, and continue thread execution (maybe restarting the process)? See the sketch after this list.
-
C++ ffmpeg video missing frames and won't play in Quicktime
5 December 2019, by Oliver Dain
I wrote some C++ code that uses ffmpeg to encode a video. I'm having two strange issues:
- The final video is always missing 1 frame. That is, if I have it encode 10 frames, the final video only has 9 (at least, that's what ffprobe -show_frames -pretty $VIDEO | grep -F '[FRAME]' | wc -l tells me).
- The final video plays fine in some players (mpv and vlc) but not in Quicktime. Quicktime just shows a completely black screen.
My code is roughly this (modified a bit to remove types that are unique to our code base):
First, I open the video file, write the headers and initialize things:
template <class PtrT>
using UniquePtrWithDeleteFunction = std::unique_ptr<PtrT, void (*)(PtrT*)>;

std::unique_ptr<FfmpegEncodingFrameSink> FfmpegEncodingFrameSink::Create(
        const std::string& dest_url) {
    AVFormatContext* tmp_format_ctxt;
    auto alloc_format_res = avformat_alloc_output_context2(
            &tmp_format_ctxt, nullptr, "mp4", dest_url.c_str());
    if (alloc_format_res < 0) {
        throw FfmpegException("Error opening output file.");
    }
    auto format_ctxt = UniquePtrWithDeleteFunction<AVFormatContext>(
            tmp_format_ctxt, CloseAvFormatContext);

    AVStream* out_stream_video = avformat_new_stream(format_ctxt.get(), nullptr);
    if (out_stream_video == nullptr) {
        throw FfmpegException("Could not create outputstream");
    }

    auto codec_context = GetCodecContext(options);
    out_stream_video->time_base = codec_context->time_base;
    auto ret = avcodec_parameters_from_context(
            out_stream_video->codecpar, codec_context.get());
    if (ret < 0) {
        throw FfmpegException("Failed to copy encoder parameters to outputstream");
    }

    if (!(format_ctxt->oformat->flags & AVFMT_NOFILE)) {
        ret = avio_open(&format_ctxt->pb, dest_url.c_str(), AVIO_FLAG_WRITE);
        if (ret < 0) {
            throw VideoDecodeException("Could not open output file: " + dest_url);
        }
    }

    ret = avformat_init_output(format_ctxt.get(), nullptr);
    if (ret < 0) {
        throw FfmpegException("Unable to initialize the codec.");
    }

    ret = avformat_write_header(format_ctxt.get(), nullptr);
    if (ret < 0) {
        throw FfmpegException("Error occurred writing format header");
    }

    return std::unique_ptr<FfmpegEncodingFrameSink>(
            new FfmpegEncodingFrameSink(std::move(format_ctxt), std::move(codec_context)));
}

Then, every time I get a new frame to encode I pass it to this function (the frames are being decoded via ffmpeg from another mp4 file which Quicktime plays just fine):
// If frame == nullptr then we're done and we're just flushing the encoder
// otherwise encode an actual frame
void FfmpegEncodingFrameSink::EncodeAndWriteFrame(const AVFrame* frame) {
    auto ret = avcodec_send_frame(codec_ctxt_.get(), frame);
    if (ret < 0) {
        throw FfmpegException("Error encoding the frame.");
    }

    AVPacket enc_packet;
    enc_packet.data = nullptr;
    enc_packet.size = 0;
    av_init_packet(&enc_packet);
    do {
        ret = avcodec_receive_packet(codec_ctxt_.get(), &enc_packet);
        if (ret == AVERROR(EAGAIN)) {
            CHECK(frame != nullptr);
            break;
        } else if (ret == AVERROR_EOF) {
            CHECK(frame == nullptr);
            break;
        } else if (ret < 0) {
            throw FfmpegException("Error putting the encoded frame into the packet.");
        }
        assert(ret == 0);
        enc_packet.stream_index = 0;
        LOG(INFO) << "Writing packet to stream.";
        av_interleaved_write_frame(format_ctxt_.get(), &enc_packet);
        av_packet_unref(&enc_packet);
    } while (ret == 0);
}

Finally, in my destructor I close everything up like so:
FfmpegEncodingFrameSink::~FfmpegEncodingFrameSink() {
    // Pass a nullptr to EncodeAndWriteFrame so it flushes the encoder
    EncodeAndWriteFrame(nullptr);
    // write mp4 trailer
    av_write_trailer(format_ctxt_.get());
}

If I run this passing n frames to EncodeAndWriteFrame, the line LOG(INFO) << "Writing packet to stream."; runs n times, indicating that n packets were written to the stream. But ffprobe always shows only n - 1 frames in the video, and the final video doesn't play in Quicktime. What am I doing wrong?
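
One direction worth checking, offered here as a hedged sketch rather than a confirmed diagnosis: avcodec_receive_packet stamps packets in the encoder's time base, while av_interleaved_write_frame expects timestamps in the stream's time base, and inconsistent timestamps are a classic cause of a mis-placed or dropped final frame. Rescaling before muxing would look roughly like this (member names as in the question):

// Inside the write loop, before av_interleaved_write_frame:
enc_packet.stream_index = 0;
// Convert pts/dts/duration from the codec time base to the muxer's
// stream time base; without this the muxer may mis-place packets.
av_packet_rescale_ts(&enc_packet,
                     codec_ctxt_->time_base,
                     format_ctxt_->streams[0]->time_base);
av_interleaved_write_frame(format_ctxt_.get(), &enc_packet);

As for the black screen, Quicktime's H.264 decoder is far pickier about pixel formats than mpv or vlc; if GetCodecContext selected something other than 4:2:0 (for example yuv444p), forcing codec_context->pix_fmt = AV_PIX_FMT_YUV420P is a plausible thing to try.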
-
How to fetch both live video frame and timestamp from ffmpeg to Python on Windows
8 May 2018, by vijiboy
Searching for an alternative, since OpenCV would not provide timestamps for a live camera stream (on Windows), which my computer vision algorithm requires, I found ffmpeg and this excellent article https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
The solution uses ffmpeg, accessing its standard output (stdout) stream. I extended it to read the standard error (stderr) stream as well.
Working up the Python code on Windows, I received the video frames from ffmpeg's stdout, but stderr froze after delivering the showinfo video-filter details (the timestamp) for the first frame.
I recall seeing somewhere on an ffmpeg forum that video filters like showinfo are bypassed when output is redirected. Is this why the following code does not work as expected?
Expected: it should write video frames to disk as well as print the timestamp details.
Actual: it writes the video frames but does not get the timestamp (showinfo) details.
Here's the code I tried:
import subprocess as sp
import numpy
import cv2

command = ['ffmpeg',
           '-i', 'e:\sample.wmv',
           '-pix_fmt', 'rgb24',
           '-vcodec', 'rawvideo',
           '-vf', 'showinfo',  # video filter - showinfo will provide frame timestamps
           '-an', '-sn',       # -an, -sn disable audio and sub-title processing respectively
           '-f', 'image2pipe', '-']  # we need to output to a pipe

# TODO someone on an ffmpeg forum said video filters (e.g. showinfo) are
# bypassed when stdout is redirected to pipes???
pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE)

for i in range(10):
    raw_image = pipe.stdout.read(1280 * 720 * 3)
    img_info = pipe.stderr.read(244)  # 244 characters is the current output of the showinfo video filter
    print "showinfo output", img_info
    image1 = numpy.fromstring(raw_image, dtype='uint8')
    image2 = image1.reshape((720, 1280, 3))
    # write video frame to file just to verify
    videoFrameName = 'Video_Frame{0}.png'.format(i)
    cv2.imwrite(videoFrameName, image2)
    # throw away the data in the pipe's buffer.
    pipe.stdout.flush()
    pipe.stderr.flush()

So how can I still get the frame timestamps from ffmpeg into the Python code, so that they can be used in my computer vision algorithm?
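
A minimal sketch of one workaround, assuming the same command list as above: showinfo lines are not fixed-length and pipes are buffered, so a blocking pipe.stderr.read(244) can stall even while ffmpeg keeps producing frames. Draining stderr line by line on a separate thread decouples the two pipes; the drain_stderr helper and the timestamps list are illustrative names, not part of the original code.

import subprocess as sp
import threading

def drain_stderr(stderr, timestamps):
    # showinfo lines look like "[Parsed_showinfo_0 @ ...] n:0 pts:0 pts_time:0 ..."
    for line in iter(stderr.readline, b''):
        if b'pts_time:' in line:
            timestamps.append(line)  # collect timestamps asynchronously

timestamps = []
pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE)
reader = threading.Thread(target=drain_stderr, args=(pipe.stderr, timestamps))
reader.daemon = True  # don't keep the interpreter alive just for the reader
reader.start()

for i in range(10):
    # stdout reads can no longer block against an unread stderr buffer
    raw_image = pipe.stdout.read(1280 * 720 * 3)
    # ... reshape and write the frame as before; timestamps fills up in parallel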