
Other articles (97)
-
Multilang: improving the interface for multilingual blocks
18 February 2011 — Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
Once it is activated, a preconfiguration is put in place automatically by MediaSPIP init so that the new functionality is immediately operational. No configuration step is therefore required.
-
Managing creation and editing rights for objects
8 February 2011 — By default, many features are restricted to administrators but can be configured independently to change the minimum status required to use them, in particular: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;
-
Uploading media and themes via FTP
31 May 2013 — MediaSPIP also handles media transferred via FTP. If you prefer to upload this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
From the start you will find the following folders in your FTP space: config/, the site configuration folder; IMG/, media already processed and online on the site; local/, the website cache directory; themes/, themes or custom style sheets; tmp/, the working folder (...)
On other sites (10412)
-
Revision 34808: url class for the link rather than org (site vs. company, thanks tetue)
31 January 2010, by brunobergot@…
-
Make AVI file from H264 compressed data
6 April 2017, by vominhtien961476 — I'm using the ffmpeg libraries to create an AVI file, following the muxing.c ffmpeg example, as below.
- Allocate the output media context:
avformat_alloc_output_context2
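For reference, a minimal sketch of that allocation for an AVI target (the filename is a placeholder, not taken from the question):
AVFormatContext *oc = NULL;
/* Let libavformat pick the AVI muxer, either from the forced "avi" name or the extension. */
avformat_alloc_output_context2(&oc, NULL, "avi", "output.avi");
if (!oc)
    return RS_NOT_OK; /* could not allocate the muxing context */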
-
Add a video stream using the AV_CODEC_ID_H264 codec with the following set of parameters:
int AddVideoStream(AVStream *&video_st, AVFormatContext *&oc, AVCodec **codec, enum AVCodecID codec_id)
{
    AVCodecContext *c;
    char strError[STR_LENGTH_256];
    /* find the encoder */
    *codec = avcodec_find_encoder(codec_id); // codec_id = AV_CODEC_ID_H264
    if (!(*codec)) {
        sprintf(strError, "Could not find encoder for '%s' line %d\n", avcodec_get_name(codec_id), __LINE__);
        commonGlobal->WriteRuntimeBackupLogs(strError);
        return RS_NOT_OK;
    }
    video_st = avformat_new_stream(oc, *codec);
    if (!video_st) {
        sprintf(strError, "Could not allocate stream line %d\n", __LINE__);
        commonGlobal->WriteRuntimeBackupLogs(strError);
        return RS_NOT_OK;
    }
    video_st->id = oc->nb_streams - 1;
    c = video_st->codec;
    avcodec_get_context_defaults3(c, *codec);
    c->codec_id = codec_id;
    c->bit_rate = 500 * 1000;
    /* Resolution must be a multiple of two. */
    c->width = 1280;
    c->height = 720;
    /* timebase: This is the fundamental unit of time (in seconds) in terms
     * of which frame timestamps are represented. For fixed-fps content,
     * timebase should be 1/framerate and timestamp increments should be
     * identical to 1. */
    c->time_base.den = 25 * 1000;
    c->time_base.num = 1000;
    c->gop_size = 12; // (int)(av_q2d(c->time_base) / 2); // GOP size is framerate/2
    c->pix_fmt = STREAM_PIX_FMT;
    /* Some formats want stream headers to be separate. */
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        c->flags |= CODEC_FLAG_GLOBAL_HEADER;
    return RS_OK;
}
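A hedged usage sketch of how this helper would typically be invoked right after allocating the output context (the surrounding variable names are assumptions, not the poster's code):
AVStream *video_st = NULL;
AVCodec *codec = NULL;
/* Add the H264 video stream; codec is later handed to open_video(). */
if (AddVideoStream(video_st, oc, &codec, AV_CODEC_ID_H264) != RS_OK)
    return RS_NOT_OK;
-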
Open the video stream:
open_video
int open_video(AVFormatContext *oc, AVCodec *codec, AVStream *st)
{
    int ret;
    AVCodecContext *c = st->codec;
    char strError[STR_LENGTH_256];
    /* open the codec */
    ret = avcodec_open2(c, codec, NULL);
    if (ret < 0) {
        sprintf(strError, "Could not open video codec line %d", __LINE__);
        commonGlobal->WriteRuntimeBackupLogs(strError);
        return RS_NOT_OK;
    }
    /* allocate and init a re-usable frame */
    frame = avcodec_alloc_frame();
    if (!frame) {
        sprintf(strError, "Could not allocate video frame line %d", __LINE__);
        commonGlobal->WriteRuntimeBackupLogs(strError);
        return RS_NOT_OK;
    }
    /* Allocate the encoded raw picture. */
    ret = avpicture_alloc(&dst_picture, c->pix_fmt, c->width, c->height);
    if (ret < 0) {
        sprintf(strError, "Could not allocate picture line %d", __LINE__);
        commonGlobal->WriteRuntimeBackupLogs(strError);
        return RS_NOT_OK;
    }
    /* If the output format is not YUV420P, then a temporary YUV420P
     * picture is needed too. It is then converted to the required
     * output format. */
    if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
        ret = avpicture_alloc(&src_picture, AV_PIX_FMT_YUV420P, c->width, c->height);
        if (ret < 0) {
            sprintf(strError, "Could not allocate temporary picture line %d", __LINE__);
            commonGlobal->WriteRuntimeBackupLogs(strError);
            return RS_NOT_OK;
        }
    }
    /* copy data and linesize picture pointers to frame */
    *((AVPicture *)frame) = dst_picture;
    return RS_OK;
}
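As an aside, avcodec_alloc_frame() and avpicture_alloc() are deprecated in later FFmpeg releases; a rough sketch of the equivalent allocation with the newer API (not part of the poster's code) looks like this:
AVFrame *f = av_frame_alloc();
if (!f)
    return RS_NOT_OK;
f->format = c->pix_fmt;
f->width = c->width;
f->height = c->height;
/* Allocates f->data / f->linesize, replacing the avpicture_alloc() call. */
if (av_frame_get_buffer(f, 32) < 0)
    return RS_NOT_OK;
-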
Write the AVI stream header:
avformat_write_header
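In the muxing.c flow this step is normally preceded by opening the output file itself; a minimal sketch (with a placeholder filename) would be:
/* Open the output file unless the format needs no file (not the case for AVI). */
if (!(oc->oformat->flags & AVFMT_NOFILE)) {
    if (avio_open(&oc->pb, "output.avi", AVIO_FLAG_WRITE) < 0)
        return RS_NOT_OK;
}
/* Write the AVI container header; stream parameters must be final at this point. */
if (avformat_write_header(oc, NULL) < 0)
    return RS_NOT_OK;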
-
Encode the video frame:
avcodec_encode_video2
Case a: the input here is BGR frames, so I encode them to H264 and pass the result to the next step.
Case b: the input here is H264 compressed frames (captured from an H264 RTP stream), so I skip this step and move on to the next one.
-
Write the interleaved video frame:
av_interleaved_write_frame(oc, &pkt)
Case a: the packet data encoded in step 5 is written correctly, without error.
Case b: I always get an error from av_interleaved_write_frame with value -22, which could be EINVAL (invalid argument). Can someone tell me what is wrong, or which parameters I am missing here?
int WriteVideoFrame(AVFormatContext *&oc, AVStream *&st,
                    uint8_t *imageData /* BGR data input */,
                    int width,
                    int height,
                    bool isStart,
                    bool isData,
                    bool isCompressed,
                    AVPacket *packet /* H264 data input */)
{
    int ret = 0;
    char strError[STR_LENGTH_256];
    if (isCompressed == false) // For BGR data
    {
        static struct SwsContext *sws_ctx;
        AVCodecContext *c = st->codec;
        if (isData)
        {
            if (!frame) {
                //fprintf(stderr, "Could not allocate video frame\n");
                return RS_NOT_OK;
            }
            if (isStart == true)
                frame->pts = 0;
            /* Convert or copy the input picture into dst_picture. */
            if (width != c->width || height != c->height)
            {
                if (!sws_ctx)
                {
                    sws_ctx = sws_getContext(width, height,
                                             AV_PIX_FMT_BGR24, c->width, c->height,
                                             AV_PIX_FMT_YUV420P, SWS_FAST_BILINEAR, 0, 0, 0);
                    if (!sws_ctx)
                    {
                        sprintf(strError, "Could not initialize the conversion context line %d\n", __LINE__);
                        commonGlobal->WriteRuntimeBackupLogs(strError);
                        return RS_NOT_OK;
                    }
                }
                uint8_t *inData[1] = { imageData }; // BGR24 has a single plane
                int inLinesize[1] = { 3 * width };  // BGR stride
                sws_scale(sws_ctx, inData, inLinesize, 0, height, dst_picture.data, dst_picture.linesize);
            }
            else
                BRG24ToYUV420p(dst_picture.data, imageData, width, height); // Phong Le changed this
        }
        if (oc->oformat->flags & AVFMT_RAWPICTURE)
        {
            /* Raw video case - directly store the picture in the packet */
            AVPacket pkt;
            av_init_packet(&pkt);
            pkt.flags |= AV_PKT_FLAG_KEY;
            pkt.stream_index = st->index;
            pkt.data = dst_picture.data[0];
            pkt.size = sizeof(AVPicture);
            ret = av_interleaved_write_frame(oc, &pkt);
            av_free_packet(&pkt);
        }
        else
        {
            /* encode the image */
            AVPacket pkt;
            int got_output;
            av_init_packet(&pkt);
            pkt.data = NULL; // packet data will be allocated by the encoder
            pkt.size = 0;
            ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
            if (ret < 0) {
                sprintf(strError, "Error encoding video frame line %d\n", __LINE__);
                commonGlobal->WriteRuntimeBackupLogs(strError);
                av_free_packet(&pkt);
                return RS_NOT_OK;
            }
            /* If size is zero, it means the image was buffered. */
            if (got_output) {
                if (c->coded_frame->key_frame)
                    pkt.flags |= AV_PKT_FLAG_KEY;
                pkt.stream_index = st->index;
                /* Write the compressed frame to the media file. */
                ret = av_interleaved_write_frame(oc, &pkt);
            }
            else
            {
                ret = 0;
            }
            av_free_packet(&pkt);
        }
        if (ret != 0)
        {
            sprintf(strError, "Error while writing video frame line %d\n", __LINE__);
            commonGlobal->WriteRuntimeBackupLogs(strError);
            return RS_NOT_OK;
        }
        frame->pts += av_rescale_q(1, st->codec->time_base, st->time_base);
        return RS_OK;
    }
    else /* H264 data */
    {
        if (isStart == true)
            packet->pts = 0;
        else
            packet->pts += av_rescale_q(1, st->codec->time_base, st->time_base);
        ret = av_interleaved_write_frame(oc, packet);
        if (ret < 0)
        {
            sprintf(strError, "Error while writing video frame line %d\n", __LINE__);
            commonGlobal->WriteRuntimeBackupLogs(strError);
            return RS_NOT_OK;
        }
        return RS_OK;
    }
}
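For case b in particular, a muxer generally expects a few packet fields to be set inside that H264 branch before av_interleaved_write_frame() is called; the following is only a hedged sketch of that preparation, where frame_index and is_keyframe are assumed caller-side variables, not names from the question:
packet->stream_index = st->index;  /* must reference the video stream added earlier */
/* frame_index is a hypothetical running frame counter kept by the caller. */
packet->pts = av_rescale_q(frame_index, st->codec->time_base, st->time_base);
packet->dts = packet->pts;         /* assumes no B-frames in the incoming RTP stream */
packet->duration = av_rescale_q(1, st->codec->time_base, st->time_base);
if (is_keyframe)                   /* hypothetical flag, e.g. set when an IDR NAL unit arrives */
    packet->flags |= AV_PKT_FLAG_KEY;
ret = av_interleaved_write_frame(oc, packet);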
- Close file.
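The closing step is not shown in the question; a minimal sketch of how muxing.c finishes a file (assuming the output was opened with avio_open) would be:
/* Flush the muxer's buffered packets and write the AVI index/trailer. */
av_write_trailer(oc);
if (!(oc->oformat->flags & AVFMT_NOFILE))
    avio_closep(&oc->pb); /* close the output file */
avformat_free_context(oc);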
-> Case a: the AVI file is created successfully.
-> Case b: it fails.
Thanks
Tien Vo
-
lavf/srtdec: rewrite parsing logic
22 December 2015, by Clément Bœsch
Fixes Ticket #5032
The sample in Ticket #5032 uses \r\r\n as line breaks. Since we
already handle \r, \n, or \r\n as line breaks, \r\r\n is
treated as a double line break. This is an issue because
ff_subtitles_read_text_chunk() will, as a result, stop extracting a chunk
after just one line.
So instead of parsing the SRT by "chunks" (which means splitting on every
double line break), the new parser detects timing lines and splits the
events on that basis. While this sounds safe and simple, it needs to
take into account the event number preceding the timing line while
handling situations such as:
- the event number starting at 0, or actually at any number instead of 1
- event numbers not being ordered at all
- the event number being followed by text garbage (this really happened,
  see Ticket #4898)
- the event payload containing one or several numbers (a protagonist saying
  a count-down, a date or whatever), which could be confused with a
  chapter number
- the event number being empty (see Ticket #2167)
- all kinds of weird line breaks appearing randomly, like wild pokémons
- untrustworthy line breaks (Ticket #5032)
The sample madness.srt tries to sum up most of this in one file,
ticket5032-rrn.srt is the file containing \r\r\n line breaks, and
empty-events-2167.srt contains empty events.
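A hedged illustration (not the actual lavf/srtdec code) of what "detecting timing lines" means: a line such as "00:00:10,500 --> 00:00:13,000" can be recognised with a small scanner like the one below, and each match starts a new event.
#include <stdio.h>

/* Returns 1 if the line looks like an SRT timing line, 0 otherwise. */
static int is_srt_timing_line(const char *line)
{
    int h1, m1, s1, ms1, h2, m2, s2, ms2;
    return sscanf(line, "%d:%d:%d%*1[,.]%d --> %d:%d:%d%*1[,.]%d",
                  &h1, &m1, &s1, &ms1, &h2, &m2, &s2, &ms2) == 8;
}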