
Other articles (88)
-
Customising by adding your logo, banner or background image
5 September 2013 — Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
User profiles
12 April 2011 — Each user has a profile page allowing them to edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
The user can access profile editing from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...) -
Contribute to a better visual interface
13 April 2011 — MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
On other sites (6789)
-
multithreaded client/server listener using ffmpeg to record video
29 January 2014, by user1895639 — I've got a Python project where I need to trigger start/stop of two Axis IP cameras using ffmpeg. I've gotten bits and pieces of this to work but can't put the whole thing together. A "listener" program runs on one machine and accepts messages from other machines to start and stop recordings.
The listener responds to two commands only:
START v:/video_dir/myvideo.mov
STOP
The START command is followed by the full path of the video file to record. When a STOP command is received, the recording should stop.
I am using ffmpeg to attach to the cameras, and doing this manually works:
ffmpeg.exe -i rtsp://cameraip/blah/blah -vcodec copy -acodec copy -y c:\temp\output.mov
I can attach to the stream and, upon hitting 'q', stop the recording.
What I'd like to be able to do is relatively simple, I just can't wrap my head around it:
Listener listens
When it receives a START signal, it spawns two processes to start recording from each camera
When it receives a STOP signal, it sends the 'q' keystroke to each process to tell ffmpeg to stop recording.
I've got the listener part, but I'm just not sure how to get the multithreaded part down:
while True:
    client, address = s.accept()
    data = client.recv(size)
    if data:
        if data.startswith('START'):
            # start threads here
        elif data.startswith('STOP'):
            # how to send a stop to the newly-created processes?

In the thread code I'm doing this (which may be very incorrect):
subprocess.call('ffmpeg.exe -i "rtsp://cameraipstuff -vcodec copy -acodec copy -t 3600 -y '+filename)
I can get this process to spawn off and I see it recording, but how can I send it a "q" message? I can use a Queue to pass a stop message and then do something like
win32com.client.Dispatch('WScript.Shell').SendKeys('q')
but that seems awkward. Perhaps a pipe and sending q to stdin? Regardless, I'm pretty sure using threads is the right approach (as opposed to calling subprocess.call('ffmpeg.exe ...') twice in a row), but I just don't know how to tie things together.
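One way to sketch this (the function names below are illustrative, not from the question) is to avoid SendKeys entirely: start each ffmpeg with subprocess.Popen and a pipe attached to stdin, keep the Popen handles between the START and STOP messages, and write b"q" to stop.

```python
import subprocess

def start_recording(cmd):
    """Spawn a recorder process (e.g. an ffmpeg command line) with stdin
    piped so we can later send it the 'q' keystroke."""
    return subprocess.Popen(cmd, stdin=subprocess.PIPE)

def stop_recording(proc, timeout=10):
    """ffmpeg exits cleanly when it reads 'q' on stdin; write it, close
    the pipe, and wait for the process to finish."""
    proc.stdin.write(b"q")
    proc.stdin.flush()
    proc.stdin.close()
    return proc.wait(timeout=timeout)

# In the listener loop: on START, keep the two Popen objects returned by
# start_recording(); on STOP, call stop_recording() on each of them.
```

Because Popen already runs each child concurrently, no extra threads are needed for the recording itself; the listener only has to remember the two Popen handles between messages.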
-
doc/example: Add http multi-client example code
25 July 2015, by Stephan Holljes
-
How to use FFMPEG API to decode to client allocated memory
25 March 2020, by VorpalSword — I'm trying to use the FFmpeg API to decode into a buffer defined by the client program, following the tips in this question but using the new decoding pattern instead of the now-deprecated avcodec_decode_video2 function.
If my input file is an I-frame-only format, everything works great. I've tested with a .mov file encoded with v210 (uncompressed).
However, if the input is a long-GoP format (I'm trying with H.264 high profile 4:2:2 in an mp4 file) I get the following pleasingly psychedelic/impressionistic result:
There's clearly something motion-vectory going on here!
And if I let FFmpeg manage its own buffers for the H.264 input by not overriding AVCodecContext::get_buffer2, I can copy from the resulting frame to my desired destination buffer and get good results.
Here's my decoder method; _frame and _codecCtx are object members of type AVFrame* and AVCodecContext* respectively. They get alloc'd and init'd in the constructor.

virtual const DecodeResult decode(const rv::sz_t toggle) override {
    _toggle = toggle & 1;
    using Flags_e = DecodeResultFlags_e;
    DecodeResult ans(Flags_e::kNoResult);
    AVPacket pkt; // holds compressed data
    ::av_init_packet(&pkt);
    pkt.data = NULL;
    pkt.size = 0;
    int ret;
    // read the compressed frame to decode
    _err = av_read_frame(_fmtCtx, &pkt);
    if (_err < 0) {
        if (_err == AVERROR_EOF) {
            ans.set(Flags_e::kEndOfFile);
            _err = 0; // we can safely ignore EOF errors
            return ans;
        } else {
            baleOnFail(__PRETTY_FUNCTION__);
        }
    }
    // send (compressed) packets to the decoder until it produces an uncompressed frame
    do {
        // sender
        _err = ::avcodec_send_packet(_codecCtx, &pkt);
        if (_err < 0) {
            if (_err == AVERROR_EOF) {
                _err = 0; // EOFs are ok
                ans.set(Flags_e::kEndOfFile);
                break;
            } else {
                baleOnFail(__PRETTY_FUNCTION__);
            }
        }
        // receiver
        ret = ::avcodec_receive_frame(_codecCtx, _frame);
        if (ret == AVERROR(EAGAIN)) {
            continue;
        } else if (ret == AVERROR_EOF) {
            ans.set(Flags_e::kEndOfFile);
            break;
        } else if (ret < 0) {
            _err = ret;
            baleOnFail(__PRETTY_FUNCTION__);
        } else {
            ans.set(Flags_e::kGotFrame);
        }
        av_packet_unref(&pkt);
    } while (!ans.test(Flags_e::kGotFrame));
    //packFrame(); <-- used to copy to client image
    return ans;
}

And here's my override for get_buffer2:
int getVideoBuffer(struct AVCodecContext* ctx, AVFrame* frm) {
    // ensure frame pointers are all null
    if (frm->data[0] || frm->data[1] || frm->data[2] || frm->data[3]) {
        ::strncpy(_errMsg, "non-null frame data pointer detected.", AV_ERROR_MAX_STRING_SIZE);
        return -1;
    }
    // get format descriptor, ensure it's valid.
    const AVPixFmtDescriptor* desc = av_pix_fmt_desc_get(static_cast<AVPixelFormat>(frm->format));
    if (!desc) {
        ::strncpy(_errMsg, "Pixel format descriptor not available.", AV_ERROR_MAX_STRING_SIZE);
        return AVERROR(EINVAL);
    }
    // for video, extended_data must point to the same place as data.
    frm->extended_data = frm->data;
    // set the data pointers to point at the image data.
    int chan = 0;
    IMG* img = _imgs[_toggle];
    // initialize active channels
    for (; chan < 3; ++chan) {
        frm->buf[chan] = av_buffer_create(
            static_cast<uint8_t*>(img->begin(chan)),
            rv::unsigned_cast<int>(img->size(chan)),
            Player::freeBufferCallback, // callback does nothing
            reinterpret_cast<void*>(this),
            0 // i.e. AV_BUFFER_FLAG_READONLY is not set
        );
        frm->linesize[chan] = rv::unsigned_cast<int>(img->stride(chan));
        frm->data[chan] = frm->buf[chan]->data;
    }
    // zero out inactive channels
    for (; chan < AV_NUM_DATA_POINTERS; ++chan) {
        frm->data[chan] = NULL;
        frm->linesize[chan] = 0;
    }
    return 0;
}
I can reason that the codec needs to keep reference frames in memory, so I'm not really surprised that this isn't working, but I've not been able to figure out how to have it deliver clean decoded frames to client memory. I thought that AVFrame::key_frame would have been a clue but, after observing its behaviour in gdb, it doesn't provide a useful trigger for when to allocate AVFrame::buf entries from the buffer pool and when they can be initialized to point at client memory.
Grateful for any help!
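The send/receive contract the decoder method above implements can be modelled in miniature. The toy Python class below is a stand-in for FFmpeg, not FFmpeg itself (all names are invented): a "decoder" that needs some packets of lookahead before it can emit a frame, mimicking a long-GoP codec holding reference frames, which is exactly why receive can report EAGAIN and the caller must keep feeding packets.

```python
# Toy model of avcodec_send_packet / avcodec_receive_frame (names invented).
EAGAIN, OK = "EAGAIN", "OK"

class ToyDecoder:
    def __init__(self, delay):
        # `delay` packets of lookahead are needed before a frame comes out,
        # like a codec that buffers reference frames internally.
        self.delay = delay
        self.queue = []

    def send_packet(self, pkt):
        self.queue.append(pkt)
        return OK

    def receive_frame(self):
        # Not enough input buffered yet: caller must send more packets.
        if len(self.queue) <= self.delay:
            return EAGAIN, None
        return OK, self.queue.pop(0)

def decode_one(dec, packets):
    """The same drain loop as the C++ decode() above: keep sending
    until receive_frame stops returning EAGAIN."""
    for pkt in packets:
        dec.send_packet(pkt)
        status, frame = dec.receive_frame()
        if status == OK:
            return frame
    return None
```

With delay=2 the first two receive calls report EAGAIN and only the third send produces a frame; an I-frame-only stream corresponds to delay=0, which is why the asker's approach works there and breaks on long-GoP input.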