
Other articles (56)
-
Customising by adding your logo, banner or background image
5 September 2013. Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or later. If in doubt, contact your MediaSPIP administrator to find out. -
Adding notes and captions to images
7 February 2011. To add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, editing and deleting notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
On other sites (11881)
-
x86/af_afir: use three operand form for some instructions
4 January 2019, by James Almer -
Publish RTMP stream to Red5 Server from iOS camera
7 September 2015, by Mohammad Asif
Please look at the following code. I have encoded CMSampleBufferRef frames to H.264 (AV_CODEC_ID_H264), but I don't know how to transmit the result to the Red5 server. Thanks.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (!CMSampleBufferDataIsReady(sampleBuffer)) {
        NSLog(@"Sample buffer is not ready. Skipping sample.");
        return;
    }
    if (captureOutput != videoOutput) {
        return;
    }

    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    unsigned char *rawPixelBase =
        (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

    // Lazily create the H.264 encoder on the first frame.
    if (codec == nil) {
        fmt   = avformat_alloc_context();
        codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        if (!codec) {
            NSLog(@"Codec not found!");
            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
            return;
        }
        context = avcodec_alloc_context3(codec);
        if (!context) {
            NSLog(@"Could not allocate codec context.");
            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
            return;
        }
        context->bit_rate           = 400000; // hard-coded bit rate
        context->bit_rate_tolerance = 10;
        context->time_base          = (AVRational){1, 25}; // frames per second
        context->gop_size           = 1;
        context->width              = (int)width;
        context->height             = (int)height;
        context->pix_fmt            = AV_PIX_FMT_YUV420P;

        if (avcodec_open2(context, codec, NULL) < 0) {
            NSLog(@"Unable to open codec.");
            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
            return;
        }
        frame = av_frame_alloc();
        if (!frame) {
            NSLog(@"Unable to allocate frame.");
            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
            return;
        }
    }

    frame->format = context->pix_fmt;
    frame->width  = context->width;
    frame->height = context->height;
    // NB: pts should be an integer tick count in time_base units, not a
    // float expression; this truncates to 2 * count.
    frame->pts = (1.0 / 30) * 60 * count;
    // NB: the capture plane data is unlikely to already be planar YUV420P;
    // a sws_scale conversion is missing before handing it to the encoder.
    avpicture_fill((AVPicture *)frame, rawPixelBase, context->pix_fmt,
                   frame->width, frame->height);

    int got_output = 0;
    av_init_packet(&packet);
    do {
        avcodec_encode_video2(context, &packet, frame, &got_output);
        if (isFirstPacket) {
            [rtmp sendCreateStreamPacket];
            isFirstPacket = false;
            avformat_alloc_output_context2(&ofmt_ctx, NULL, "flv",
                                           [kRtmpEP UTF8String]); // RTMP
            // NB: no stream is ever added (avformat_new_stream), the output
            // is never opened (avio_open) and no header is written
            // (avformat_write_header) before writing packets below.
        }
        // NB: out of range; valid stream indices are 0..nb_streams-1.
        packet.stream_index = ofmt_ctx->nb_streams;
        av_interleaved_write_frame(ofmt_ctx, &packet);
        count++;
    } while (got_output);

    // Unlock the pixel data.
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
} -
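The code above encodes frames but never creates an output stream, opens the connection, or writes the FLV header before calling av_interleaved_write_frame. Independent of fixing the Objective-C code, the publish path to Red5 can be sanity-checked with the ffmpeg command-line tool; a sketch, where input.mp4 and the rtmp:// URL are placeholders for your own source file and Red5 endpoint:

```shell
# Push a local file to an RTMP endpoint as H.264 in FLV (the container RTMP carries).
# input.mp4 and rtmp://localhost/live/stream are placeholder values.
ffmpeg -re -i input.mp4 \
       -c:v libx264 -pix_fmt yuv420p \
       -f flv rtmp://localhost/live/stream
```

The -re flag throttles reading to the input's native frame rate, simulating a live source; if this command plays back from the server, the endpoint is good and the remaining problem is in the app's muxer setup.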
Combine multiple images to form a strip of images ffmpeg
10 December 2019, by phuong do
I wish to combine multiple images into a single strip of images using FFmpeg.
I have been trying to search for this on Google, but I am unable to find anything useful. All links take me to pages where multiple images are combined to produce a video output.
Assuming that all the files are of the same width and height, how can I join them to get a single strip of images? Can anybody help me?
example image
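Assuming equal-height inputs, FFmpeg's hstack filter concatenates images side by side in a single pass; a sketch with placeholder filenames (img1.png, img2.png, img3.png and strip.png stand in for your own files):

```shell
# Join three equally sized images into one horizontal strip.
# Use vstack for a vertical strip, or the tile filter for an N x M grid.
ffmpeg -i img1.png -i img2.png -i img3.png \
       -filter_complex "hstack=inputs=3" strip.png
```

hstack requires all inputs to share the same height (vstack the same width), which matches the question's assumption that the files have identical dimensions.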