
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (45)
-
Customising by adding your logo, banner or background image
5 September 2013
Some themes take three customisation elements into account: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013
Present changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP via the news section.
In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customise the form for creating a news item.
News item creation form: for a document of the news type, the fields offered by default are: publication date (customise the publication date) (...)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed Médiaspip is at version 0.2 or later. If needed, contact the administrator of your MédiaSpip to find out.
On other sites (8552)
-
How to generate a video output file path every 5 seconds and encode and write the data in RTMP format on iOS? [on hold]
16 July 2015, by Sandeep Joshi

- (void)segmentRecording:(NSTimer *)timer {
    if (!shouldBeRecording) {
        [timer invalidate];
    }
    // Swap the active writer/encoders for the queued ones, then finish the old segment
    AVAssetWriter *tempAssetWriter = self.assetWriter;
    AVAssetWriterInput *tempAudioEncoder = self.audioEncoder;
    AVAssetWriterInput *tempVideoEncoder = self.videoEncoder;
    self.assetWriter = queuedAssetWriter;
    self.audioEncoder = queuedAudioEncoder;
    self.videoEncoder = queuedVideoEncoder;
    NSLog(@"Switching encoders");

    dispatch_async(segmentingQueue, ^{
        if (tempAssetWriter.status == AVAssetWriterStatusWriting) {
            @try {
                [tempAudioEncoder markAsFinished];
                [tempVideoEncoder markAsFinished];
                [tempAssetWriter finishWritingWithCompletionHandler:^{
                    if (tempAssetWriter.status == AVAssetWriterStatusFailed) {
                        [self showError:tempAssetWriter.error];
                    } else {
                        [self uploadLocalURL:tempAssetWriter.outputURL];
                    }
                }];
            }
            @catch (NSException *exception) {
                NSLog(@"Caught exception: %@", [exception description]);
                //[BugSenseController logException:exception withExtraData:nil];
            }
        }
        self.segmentCount++;
        // Prepare the writer and encoders for the next segment
        if (self.readyToRecordAudio && self.readyToRecordVideo) {
            NSError *error = nil;
            self.queuedAssetWriter = [[AVAssetWriter alloc] initWithURL:[OWUtilities urlForRecordingSegmentCount:segmentCount basePath:self.basePath] fileType:(NSString *)kUTTypeMPEG4 error:&error];
            if (error) {
                [self showError:error];
            }
            self.queuedVideoEncoder = [self setupVideoEncoderWithAssetWriter:self.queuedAssetWriter formatDescription:videoFormatDescription bitsPerSecond:videoBPS];
            self.queuedAudioEncoder = [self setupAudioEncoderWithAssetWriter:self.queuedAssetWriter formatDescription:audioFormatDescription bitsPerSecond:audioBPS];
            //NSLog(@"Encoder switch finished");
        }
    });
}
- (void)uploadLocalURL:(NSURL *)url {
    NSLog(@"upload local url: %@", url);
    NSString *inputPath = [url path];
    NSString *outputPath = [inputPath stringByReplacingOccurrencesOfString:@".mp4" withString:@".ts"];
    NSString *outputFileName = [outputPath lastPathComponent];
    NSDictionary *options = @{kFFmpegOutputFormatKey: @"mpegts"};
    NSLog(@"%@ conversion...", outputFileName);
    [ffmpegWrapper convertInputPath:[url path] outputPath:outputPath options:options progressBlock:nil completionBlock:^(BOOL success, NSError *error) {
        if (success) {
            if (!isRtmpConnected) {
                isRtmpConnected = [rtmp openWithURL:HOST_URL enableWrite:YES];
            }
            isRtmpConnected = [rtmp isConnected];
            if (isRtmpConnected) {
                // Load the converted .ts segment from disk (a local path needs a file URL)
                NSData *video = [NSData dataWithContentsOfURL:[NSURL fileURLWithPath:outputPath]];
                NSUInteger length = [video length];
                NSUInteger chunkSize = 1024 * 5;
                NSUInteger offset = 0;
                NSLog(@"original video length: %lu \n chunkSize : %lu", (unsigned long)length, (unsigned long)chunkSize);
                // Split the video into small chunks to publish to the media server
                do {
                    NSUInteger thisChunkSize = length - offset > chunkSize ? chunkSize : length - offset;
                    NSData *chunk = [NSData dataWithBytesNoCopy:(char *)[video bytes] + offset
                                                         length:thisChunkSize
                                                   freeWhenDone:NO];
                    offset += thisChunkSize;
                    // Write the new chunk to the RTMP server
                    NSLog(@"%lu", (unsigned long)[rtmp write:chunk]);
                    sleep(1);
                } while (offset < length);
            } else {
                [rtmp close];
            }
        } else {
            NSLog(@"conversion error: %@", error.userInfo);
        }
    }];
}

This code is used for live streaming, sending the data through an RTMP wrapper.
The data is not written to the socket properly, because a different output file is generated every 5 seconds. Is this the proper way to do it?
I have no idea how to obtain the NSData correctly.
Please help me.
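One alternative worth sketching (not what the question's code does): instead of hand-chunking the NSData through the RTMP wrapper, a finished segment can be handed to the ffmpeg command line and pushed to the server directly. The endpoint and segment path below are hypothetical, and ffmpeg is assumed to be available:
# Remux a finished MPEG-TS segment to FLV and push it to an RTMP endpoint.
# -re paces the input at real-time speed; -c copy avoids re-encoding.
ffmpeg -re -i segment_0.ts -c copy -f flv rtmp://example.com/live/streamKey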
-
Compiling ffmpeg for iOS and gas-preprocessor.pl
16 May 2017, by user500
I want to compile ffmpeg for iOS. I have done this a few times before, but now I am on a clean, new Mavericks install, and on configure I always get:
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
GNU assembler not found, install gas-preprocessor
If you think configure made a mistake, make sure you are using the latest
version from Git. If the latest version fails, report the problem to the
ffmpeg-user@ffmpeg.org mailing list or IRC #ffmpeg on irc.freenode.net.
Include the log file "config.log" produced by configure as this will help
solving the problem.
I have the current Xcode installed, as well as Homebrew, and the current gas-preprocessor.pl (https://github.com/yuvi/gas-preprocessor) in /usr/bin and also in /usr/local/bin.
Running
perl /usr/bin/gas-preprocessor.pl gcc
I get: Unrecognized input filetype at /usr/bin/gas-preprocessor.pl line 33.
This config works:
./configure \
--extra-cflags='-arch arm64 -mios-version-min=7.0 -mthumb' \
--extra-ldflags='-arch arm64 -mios-version-min=7.0' \
--enable-cross-compile \
--arch=arm64 \
--target-os=darwin \
--cc=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang \
--sysroot=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk \
--prefix=arm64 \
--disable-doc \
--disable-shared \
--disable-everything \
--enable-static \
--enable-pic \
--disable-muxers \
--enable-muxer=flv \
--disable-demuxers \
--enable-demuxer=h264 \
--enable-demuxer=pcm_s16le \
--disable-devices \
--disable-parsers \
--enable-parser=h264 \
--disable-encoders \
--enable-encoder=aac \
--disable-decoders \
--enable-decoder=h264 \
--enable-decoder=pcm_s16le \
--disable-protocols \
--enable-protocol=rtmp \
--disable-filters \
--disable-bsfs
This config throws the error above (GNU assembler not found, install gas-preprocessor); a possible workaround is sketched after the listing:
./configure \
--cpu=cortex-a8 \
--extra-cflags='-arch armv7 -mios-version-min=7.0 -mthumb' \
--extra-ldflags='-arch armv7 -mios-version-min=7.0' \
--enable-cross-compile \
--arch=armv7 \
--target-os=darwin \
--cc=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang \
--sysroot=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk \
--prefix=armv7 \
--disable-doc \
--disable-shared \
--disable-everything \
--enable-static \
--enable-pic \
--disable-muxers \
--enable-muxer=flv \
--disable-demuxers \
--enable-demuxer=h264 \
--enable-demuxer=pcm_s16le \
--disable-devices \
--disable-parsers \
--enable-parser=h264 \
--disable-encoders \
--enable-encoder=aac \
--disable-decoders \
--enable-decoder=h264 \
--enable-decoder=pcm_s16le \
--disable-protocols \
--enable-protocol=rtmp \
--disable-filters \
--disable-bsfs
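A possible workaround, not verified on this exact setup: the armv7 build assembles ARM .S files and needs gas-preprocessor.pl to wrap the compiler, which seems to be why only the armv7 config trips the check. The "Unrecognized input filetype" message when the script is run with only gcc as an argument is expected, since no assembly file was passed. Make the script executable and on the PATH, or hand it to configure explicitly via --as; the paths below are assumptions:
# Assumed locations; adjust to wherever gas-preprocessor.pl and clang actually live.
chmod +x /usr/local/bin/gas-preprocessor.pl
export PATH=/usr/local/bin:$PATH
./configure \
--as="/usr/local/bin/gas-preprocessor.pl /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang" \
--cpu=cortex-a8 \
--extra-cflags='-arch armv7 -mios-version-min=7.0 -mthumb' \
... # remaining armv7 options as in the listing above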
-
How to generate video as fast as possible with subtitles and audio on node.js + ffmpeg?
12 September 2018, by DSeregin
Intro:
We receive some pieces of text from the site.
The pieces arrive at the node.js server. As output we need to get a single video, merged from all the pieces of text, voiced by a machine voice, with subtitles and a background audio track added, so that the user can share the video on social networks. The MKV format is not supported by VK.com.
The options we have tried:
1. Get all the text at once, generate the entire speech, create a subtitle file, and burn the subtitles into the .mp4 video (vk.com does not support the .mkv container). This took 12 seconds of processing for a 45-second video on the local computer.
2. Generate audio and video files for each piece of text (with subtitles added). This took one second per piece of text. On the final request we merge all the pieces together. That last request (the merge) took 2-3 seconds, which is already bearable.
The second variant looks acceptable in terms of speed, but with 50 clients running at the same time, the computer (tested on a MacBook Pro 2013, 2.4 GHz i7, 8 GB 1600 MHz DDR3, 256 GB SSD) processed only 1 piece from 1 client in 60 seconds (60 times slower), and then it hung completely.
- Audio generation is done through Amazon Polly
- Subtitles are generated via https://github.com/gsantiago/subtitle.js#readme
- From node.js we call ffmpeg using https://github.com/fluent-ffmpeg/node-fluent-ffmpeg
- Video resolution: 1280x720
The commands we used:
- Burn the subtitles into the video and trim it to a nominal 6 seconds (in the code we send a Unix timestamp):
ffmpeg -i import/back.mov -i export_0/tmp.srt -scodec mov_text -t 6 export_0/output.mov
- Merge all the audio:
ffmpeg -i audio1.mp3 ... -i audio15.mp3 merged.mp3
- Overlay the background audio track on the speech:
ffmpeg -i merged.mp3 -i back.mp3 -filter_complex amerge -ac 2 -c:a libmp3lame -q:a 4 -shortest audio.mp3
- Merge all the videos:
ffmpeg -f concat -i video.txt -c copy video.mp4
- Overlay the audio on the video:
ffmpeg -i audio.mp3 -i video.mp4 -i test.mp4 -i export/output.mp3 -c:v copy -c:a aac -map 0:v:0 -map 1:a:0 -shortest output.mp4
Questions that torment us:
-
Can this be done faster?
-
Can I use other codecs, or methods of joining the pieces that avoid re-encoding?
-
Should we call ffmpeg directly, without a wrapper? (In fact, that saves 50-100 ms.)
-
Should we avoid saving to disk altogether, writing the data to streams and joining them at the end? (A sketch of this idea follows below.)
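On the last two questions, one direction that may be worth trying (a sketch with hypothetical file names, not a drop-in for the pipeline above): MPEG-TS can be concatenated byte-wise, so pieces rendered as .ts can be joined with ffmpeg's concat protocol without re-encoding, and the same mpegts output can be sent to a pipe instead of a file so intermediate results need not touch the disk:
# Render one piece straight to MPEG-TS (names are hypothetical).
ffmpeg -i piece_0.mov -c:v libx264 -c:a aac -f mpegts piece_0.ts
# Join the pieces without re-encoding; aac_adtstoasc is needed when muxing back into MP4.
ffmpeg -i "concat:piece_0.ts|piece_1.ts|piece_2.ts" -c copy -bsf:a aac_adtstoasc final.mp4
# The -f mpegts output can also go to stdout (pipe:1) to be consumed by a stream instead of a file.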