
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (67)
-
Improving the base version
13 September 2013
A nicer multiple select
The Chosen plugin improves the usability of multiple-selection fields; compare the two images below.
To use it, enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen), enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...) -
Custom menus
14 November 2010, by
MediaSPIP uses the Menus plugin to manage several configurable menus for navigation.
This lets channel administrators configure these menus precisely.
Menus created at site initialization
By default, three menus are created automatically when the site is initialized: the main menu; identifier: barrenav; this menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with templates based on Zpip; (...) -
The plugin: Shared-hosting (mutualisation) management
2 March 2010, by
The mutualisation management plugin lets you manage the various MediaSPIP channels from a master site. Its goal is to provide a pure-SPIP solution to replace the previous one.
Basic installation
Install the SPIP files on the server.
Then add the "mutualisation" plugin at the root of the site, as described here.
Customize the central mes_options.php file as you wish. As an example, here is the one used by the mediaspip.net platform:
<?php (...)
On other sites (9953)
-
Compiling ffmpeg for iOS and gas-preprocessor.pl
16 May 2017, by user500
I want to compile ffmpeg for iOS. I have done it a few times before, but now, on a clean new Mavericks install, configure always fails with:
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
GNU assembler not found, install gas-preprocessor
If you think configure made a mistake, make sure you are using the latest
version from Git. If the latest version fails, report the problem to the
ffmpeg-user@ffmpeg.org mailing list or IRC #ffmpeg on irc.freenode.net.
Include the log file "config.log" produced by configure as this will help
solving the problem.
I have the current Xcode installed, as well as Homebrew, and the current gas-preprocessor.pl (https://github.com/yuvi/gas-preprocessor) in /usr/bin and also in /usr/local/bin.
Running
perl /usr/bin/gas-preprocessor.pl gcc
gives: Unrecognized input filetype at /usr/bin/gas-preprocessor.pl line 33.
This config works :
./configure \
--extra-cflags='-arch arm64 -mios-version-min=7.0 -mthumb' \
--extra-ldflags='-arch arm64 -mios-version-min=7.0' \
--enable-cross-compile \
--arch=arm64 \
--target-os=darwin \
--cc=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang \
--sysroot=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk \
--prefix=arm64 \
--disable-doc \
--disable-shared \
--disable-everything \
--enable-static \
--enable-pic \
--disable-muxers \
--enable-muxer=flv \
--disable-demuxers \
--enable-demuxer=h264 \
--enable-demuxer=pcm_s16le \
--disable-devices \
--disable-parsers \
--enable-parser=h264 \
--disable-encoders \
--enable-encoder=aac \
--disable-decoders \
--enable-decoder=h264 \
--enable-decoder=pcm_s16le \
--disable-protocols \
--enable-protocol=rtmp \
--disable-filters \
--disable-bsfs
This config throws the error above (GNU assembler not found, install gas-preprocessor):
./configure \
--cpu=cortex-a8 \
--extra-cflags='-arch armv7 -mios-version-min=7.0 -mthumb' \
--extra-ldflags='-arch armv7 -mios-version-min=7.0' \
--enable-cross-compile \
--arch=armv7 \
--target-os=darwin \
--cc=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang \
--sysroot=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk \
--prefix=armv7 \
--disable-doc \
--disable-shared \
--disable-everything \
--enable-static \
--enable-pic \
--disable-muxers \
--enable-muxer=flv \
--disable-demuxers \
--enable-demuxer=h264 \
--enable-demuxer=pcm_s16le \
--disable-devices \
--disable-parsers \
--enable-parser=h264 \
--disable-encoders \
--enable-encoder=aac \
--disable-decoders \
--enable-decoder=h264 \
--enable-decoder=pcm_s16le \
--disable-protocols \
--enable-protocol=rtmp \
--disable-filters \
--disable-bsfs
-
How to write data over RTMP from a video output file path generated every 5 seconds on iOS? [on hold]
16 July 2015, by Sandeep Joshi
- (void)segmentRecording:(NSTimer *)timer {
if (!shouldBeRecording) {
[timer invalidate];
}
AVAssetWriter *tempAssetWriter = self.assetWriter;
AVAssetWriterInput *tempAudioEncoder = self.audioEncoder;
AVAssetWriterInput *tempVideoEncoder = self.videoEncoder;
self.assetWriter = queuedAssetWriter;
self.audioEncoder = queuedAudioEncoder;
self.videoEncoder = queuedVideoEncoder;
NSLog(@"Switching encoders");
dispatch_async(segmentingQueue, ^{
if (tempAssetWriter.status == AVAssetWriterStatusWriting) {
@try {
[tempAudioEncoder markAsFinished];
[tempVideoEncoder markAsFinished];
[tempAssetWriter finishWritingWithCompletionHandler:^{
if (tempAssetWriter.status == AVAssetWriterStatusFailed) {
[self showError:tempAssetWriter.error];
} else {
[self uploadLocalURL:tempAssetWriter.outputURL];
}
}];
}
@catch (NSException *exception) {
NSLog(@"Caught exception: %@", [exception description]);
//[BugSenseController logException:exception withExtraData:nil];
}
}
self.segmentCount++;
if (self.readyToRecordAudio && self.readyToRecordVideo) {
NSError *error = nil;
self.queuedAssetWriter = [[AVAssetWriter alloc] initWithURL:[OWUtilities urlForRecordingSegmentCount:segmentCount basePath:self.basePath] fileType:(NSString *)kUTTypeMPEG4 error:&error];
if (error) {
[self showError:error];
}
self.queuedVideoEncoder = [self setupVideoEncoderWithAssetWriter:self.queuedAssetWriter formatDescription:videoFormatDescription bitsPerSecond:videoBPS];
self.queuedAudioEncoder = [self setupAudioEncoderWithAssetWriter:self.queuedAssetWriter formatDescription:audioFormatDescription bitsPerSecond:audioBPS];
//NSLog(@"Encoder switch finished");
}
});
}
- (void)uploadLocalURL:(NSURL *)url {
NSLog(@"upload local url: %@", url);
NSString *inputPath = [url path];
NSString *outputPath = [inputPath stringByReplacingOccurrencesOfString:@".mp4" withString:@".ts"];
NSString *outputFileName = [outputPath lastPathComponent];
NSDictionary *options = @{kFFmpegOutputFormatKey: @"mpegts"};
NSLog(@"%@ conversion...", outputFileName);
[ffmpegWrapper convertInputPath:[url path] outputPath:outputPath options:options progressBlock:nil completionBlock:^(BOOL success, NSError *error) {
if (success) {
if (!isRtmpConnected) {
isRtmpConnected = [rtmp openWithURL:HOST_URL enableWrite:YES];
}
isRtmpConnected = [rtmp isConnected];
if (isRtmpConnected) {
NSData *video = [NSData dataWithContentsOfURL:[NSURL fileURLWithPath:outputPath]]; // fileURLWithPath:, not URLWithString:, since outputPath is a filesystem path
NSUInteger length = [video length];
NSUInteger chunkSize = 1024 * 5;
NSUInteger offset = 0;
NSLog(@"original video length: %lu \n chunkSize : %lu", length,chunkSize);
// Let's split video to small chunks to publish to media server
do {
NSUInteger thisChunkSize = length - offset > chunkSize ? chunkSize : length - offset;
NSData* chunk = [NSData dataWithBytesNoCopy:(char *)[video bytes] + offset
length:thisChunkSize
freeWhenDone:NO];
offset += thisChunkSize;
// Write new chunk to rtmp server
NSLog(@"%lu", (unsigned long)[rtmp write:chunk]);
sleep(1);
} while (offset < length);
}else{
[rtmp close];
}
} else {
NSLog(@"conversion error: %@", error.userInfo);
}
}];
}
This code is used for live streaming, sending the data through an RTMP wrapper.
It does not write to the socket properly, because a different output file is generated every 5 seconds. Is this the proper way? I have no idea how to get the NSData correctly.
Please help me.
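The do/while loop in the question walks the buffer in 5 KB slices with pointer arithmetic. The same pattern can be sketched in plain C (the function and callback names here are illustrative, not part of any RTMP library):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Walk a buffer in fixed-size chunks, calling `emit` on each slice.
   Mirrors the Objective-C do/while loop above: every chunk is
   chunk_size bytes except possibly the last, which holds the
   remainder. Returns the number of chunks emitted. */
static size_t send_in_chunks(const unsigned char *buf, size_t len,
                             size_t chunk_size,
                             void (*emit)(const unsigned char *, size_t))
{
    size_t offset = 0, count = 0;
    while (offset < len) {
        size_t this_chunk = len - offset > chunk_size ? chunk_size
                                                      : len - offset;
        emit(buf + offset, this_chunk);  /* e.g. write to the RTMP socket */
        offset += this_chunk;
        ++count;
    }
    return count;
}
```

Slicing with an offset into the original buffer (as `dataWithBytesNoCopy:freeWhenDone:NO` does) avoids copying each chunk; the caller just has to keep the backing buffer alive until the loop finishes.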
-
FFmpeg - MJPEG decoding gives inconsistent values
28 December 2016, by ahmadh
I have a set of JPEG frames which I am muxing into an AVI, which gives me an MJPEG video. This is the command I run on the console:
ffmpeg -y -start_number 0 -i %06d.JPEG -codec copy vid.avi
When I try to demux the video using the FFmpeg C API, I get frames whose values are slightly different. The demuxing code looks something like this:
AVFormatContext* fmt_ctx = NULL;
AVCodecContext* cdc_ctx = NULL;
AVCodec* vid_cdc = NULL;
int ret;
unsigned int height, width;
....
// read_nframes is the number of frames to read
output_arr = new unsigned char [height * width * 3 *
sizeof(unsigned char) * read_nframes];
avcodec_open2(cdc_ctx, vid_cdc, NULL);
int num_bytes;
uint8_t* buffer = NULL;
const AVPixelFormat out_format = AV_PIX_FMT_RGB24;
num_bytes = av_image_get_buffer_size(out_format, width, height, 1);
buffer = (uint8_t*)av_malloc(num_bytes * sizeof(uint8_t));
AVFrame* vid_frame = NULL;
vid_frame = av_frame_alloc();
AVFrame* conv_frame = NULL;
conv_frame = av_frame_alloc();
av_image_fill_arrays(conv_frame->data, conv_frame->linesize, buffer,
out_format, width, height, 1);
struct SwsContext *sws_ctx = NULL;
sws_ctx = sws_getContext(width, height, cdc_ctx->pix_fmt,
width, height, out_format,
SWS_BILINEAR, NULL,NULL,NULL);
int frame_num = 0;
AVPacket vid_pckt;
while (av_read_frame(fmt_ctx, &vid_pckt) >=0) {
ret = avcodec_send_packet(cdc_ctx, &vid_pckt);
if (ret < 0)
break;
ret = avcodec_receive_frame(cdc_ctx, vid_frame);
if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
break;
if (ret >= 0) {
// convert image from native format to planar GBR
sws_scale(sws_ctx, vid_frame->data,
vid_frame->linesize, 0, vid_frame->height,
conv_frame->data, conv_frame->linesize);
unsigned char* r_ptr = output_arr +
(height * width * sizeof(unsigned char) * 3 * frame_num);
unsigned char* g_ptr = r_ptr + (height * width * sizeof(unsigned char));
unsigned char* b_ptr = g_ptr + (height * width * sizeof(unsigned char));
unsigned int pxl_i = 0;
for (unsigned int r = 0; r < height; ++r) {
uint8_t* avframe_r = conv_frame->data[0] + r*conv_frame->linesize[0];
for (unsigned int c = 0; c < width; ++c) {
r_ptr[pxl_i] = avframe_r[0];
g_ptr[pxl_i] = avframe_r[1];
b_ptr[pxl_i] = avframe_r[2];
avframe_r += 3;
++pxl_i;
}
}
++frame_num;
if (frame_num >= read_nframes)
break;
}
}
...
In my experience around two-thirds of the pixel values are different, each by +-1 (in a range of [0,255]). I am wondering: is it due to some decoding scheme FFmpeg uses for reading JPEG frames? I tried encoding and decoding PNG frames, and that works perfectly fine. I am sure this has something to do with the libav decoding process, because the MD5 values are consistent between the images and the video:
ffmpeg -i %06d.JPEG -f framemd5 -
ffmpeg -i vid.avi -f framemd5 -
In short, my goal is to get the same pixel-by-pixel values for each JPEG frame as I would have gotten if I had read the JPEG images directly. Here is the stand-alone bitbucket code I used. It includes CMake files to build the code, and a couple of JPEG frames with the converted AVI file to reproduce the problem (pass '--filetype png' to test the PNG decoding).