
Other articles (98)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
    After it is activated, a preconfiguration is set up automatically by MediaSPIP init, allowing the new feature to be operational right away. It is therefore not necessary to go through a configuration step for this.

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, beyond those of the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to handle registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides an API for validating fields (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (9104)

  • Compiling ffmpeg for iOS and gas-preprocessor.pl

    16 May 2017, by user500

    I want to compile ffmpeg for iOS. I have done it a few times before, but now I'm on a clean, new Mavericks install, and on configure I always get

    Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
    GNU assembler not found, install gas-preprocessor

    If you think configure made a mistake, make sure you are using the latest
    version from Git.  If the latest version fails, report the problem to the
    ffmpeg-user@ffmpeg.org mailing list or IRC #ffmpeg on irc.freenode.net.
    Include the log file "config.log" produced by configure as this will help
    solving the problem.

    I have the current Xcode installed, and Homebrew as well. The current gas-preprocessor.pl (https://github.com/yuvi/gas-preprocessor) is in /usr/bin and also in /usr/local/bin.


    Running perl /usr/bin/gas-preprocessor.pl gcc, I get: Unrecognized input filetype at /usr/bin/gas-preprocessor.pl line 33.
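
    As an aside, configure only needs an executable gas-preprocessor.pl on the PATH, and the yuvi repository linked above is an old fork. A minimal sketch of fetching the maintained script, assuming the libav/gas-preprocessor repository is its current home; whether this resolves the error here is untested:

    # fetch the maintained gas-preprocessor and make it executable
    curl -L https://raw.githubusercontent.com/libav/gas-preprocessor/master/gas-preprocessor.pl \
        -o /usr/local/bin/gas-preprocessor.pl
    chmod +x /usr/local/bin/gas-preprocessor.pl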


    This config works:

    ./configure \
    --extra-cflags='-arch arm64 -mios-version-min=7.0 -mthumb' \
    --extra-ldflags='-arch arm64 -mios-version-min=7.0' \
    --enable-cross-compile \
    --arch=arm64 \
    --target-os=darwin \
    --cc=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang \
    --sysroot=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk \
    --prefix=arm64 \
    --disable-doc \
    --disable-shared \
    --disable-everything \
    --enable-static \
    --enable-pic \
    --disable-muxers \
    --enable-muxer=flv \
    --disable-demuxers \
    --enable-demuxer=h264 \
    --enable-demuxer=pcm_s16le \
    --disable-devices \
    --disable-parsers \
    --enable-parser=h264 \
    --disable-encoders \
    --enable-encoder=aac \
    --disable-decoders \
    --enable-decoder=h264 \
    --enable-decoder=pcm_s16le \
    --disable-protocols \
    --enable-protocol=rtmp \
    --disable-filters \
    --disable-bsfs

    This config throws the error above (GNU assembler not found, install gas-preprocessor); a possible workaround is sketched after the listing:

    ./configure \
    --cpu=cortex-a8 \
    --extra-cflags='-arch armv7 -mios-version-min=7.0 -mthumb' \
    --extra-ldflags='-arch armv7 -mios-version-min=7.0' \
    --enable-cross-compile \
    --arch=armv7 \
    --target-os=darwin \
    --cc=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang \
    --sysroot=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk \
    --prefix=armv7 \
    --disable-doc \
    --disable-shared \
    --disable-everything \
    --enable-static \
    --enable-pic \
    --disable-muxers \
    --enable-muxer=flv \
    --disable-demuxers \
    --enable-demuxer=h264 \
    --enable-demuxer=pcm_s16le \
    --disable-devices \
    --disable-parsers \
    --enable-parser=h264 \
    --disable-encoders \
    --enable-encoder=aac \
    --disable-decoders \
    --enable-decoder=h264 \
    --enable-decoder=pcm_s16le \
    --disable-protocols \
    --enable-protocol=rtmp \
    --disable-filters \
    --disable-bsfs
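
    A plausible reading of the asymmetry: clang's integrated assembler can handle FFmpeg's aarch64 assembly, but the armv7 target pulls in hand-written GAS-syntax ARM assembly that it cannot, so configure probes for an assembler and gives up. A sketch of one possible fix, assuming gas-preprocessor.pl is executable and on the PATH; the --as spelling is borrowed from common FFmpeg iOS build scripts, not from this post:

    # same armv7 configure as above, with an explicit assembler wrapper added
    ./configure \
    --as='gas-preprocessor.pl -- /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang' \
    ...   # remaining flags exactly as in the failing configure
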
  • How to take the video output file path generated every 5 seconds, encode it in RTMP format, and write the data on iOS? [on hold]

    16 July 2015, by Sandeep Joshi
    - (void)segmentRecording:(NSTimer *)timer {
    if (!shouldBeRecording) {
       [timer invalidate];
    }
    AVAssetWriter *tempAssetWriter = self.assetWriter;
    AVAssetWriterInput *tempAudioEncoder = self.audioEncoder;
    AVAssetWriterInput *tempVideoEncoder = self.videoEncoder;
    self.assetWriter = queuedAssetWriter;
    self.audioEncoder = queuedAudioEncoder;
    self.videoEncoder = queuedVideoEncoder;
    NSLog(@"Switching encoders");

    dispatch_async(segmentingQueue, ^{
       if (tempAssetWriter.status == AVAssetWriterStatusWriting) {
           @try {
               [tempAudioEncoder markAsFinished];
               [tempVideoEncoder markAsFinished];
               [tempAssetWriter finishWritingWithCompletionHandler:^{
                   if (tempAssetWriter.status == AVAssetWriterStatusFailed) {
                       [self showError:tempAssetWriter.error];
                   } else {
                       [self uploadLocalURL:tempAssetWriter.outputURL];
                   }
               }];
           }
           @catch (NSException *exception) {
               NSLog(@"Caught exception: %@", [exception description]);
               //[BugSenseController logException:exception withExtraData:nil];
           }
       }
       self.segmentCount++;
       if (self.readyToRecordAudio && self.readyToRecordVideo) {
           NSError *error = nil;
           self.queuedAssetWriter = [[AVAssetWriter alloc] initWithURL:[OWUtilities urlForRecordingSegmentCount:segmentCount basePath:self.basePath] fileType:(NSString *)kUTTypeMPEG4 error:&error];
           if (error) {
               [self showError:error];
           }
           self.queuedVideoEncoder = [self setupVideoEncoderWithAssetWriter:self.queuedAssetWriter formatDescription:videoFormatDescription bitsPerSecond:videoBPS];
           self.queuedAudioEncoder = [self setupAudioEncoderWithAssetWriter:self.queuedAssetWriter formatDescription:audioFormatDescription bitsPerSecond:audioBPS];
           //NSLog(@"Encoder switch finished");
       }
    });
}



    - (void)uploadLocalURL:(NSURL *)url {
    NSLog(@"upload local url: %@", url);
    NSString *inputPath = [url path];
    NSString *outputPath = [inputPath stringByReplacingOccurrencesOfString:@".mp4" withString:@".ts"];
    NSString *outputFileName = [outputPath lastPathComponent];
    NSDictionary *options = @{kFFmpegOutputFormatKey: @"mpegts"};
    NSLog(@"%@ conversion...", outputFileName);
    [ffmpegWrapper convertInputPath:[url path] outputPath:outputPath options:options progressBlock:nil completionBlock:^(BOOL success, NSError *error) {
       if (success) {
           if (!isRtmpConnected) {
               isRtmpConnected = [rtmp openWithURL:HOST_URL enableWrite:YES];
           }
           isRtmpConnected = [rtmp isConnected];

           if (isRtmpConnected) {

               // outputPath is a plain file path, so read it directly rather than via URLWithString:
               NSData *video = [NSData dataWithContentsOfFile:outputPath];
               NSUInteger length = [video length];
               NSUInteger chunkSize = 1024 * 5; // 5 KiB chunks
               NSUInteger offset = 0;
               NSLog(@"original video length: %lu \n chunkSize : %lu", length,chunkSize);
             // Let's split video to small chunks to publish to media server
               do {
                   NSUInteger thisChunkSize = length - offset > chunkSize ? chunkSize : length - offset;
                   NSData* chunk = [NSData dataWithBytesNoCopy:(char *)[video bytes] + offset
                                                        length:thisChunkSize
                                                  freeWhenDone:NO];
                   offset += thisChunkSize;

                   // Write new chunk to rtmp server
                   NSLog(@"%lu", (unsigned long)[rtmp write:chunk]);
                   sleep(1);
               } while (offset < length);
           }else{
               [rtmp close];
           }


       } else {
           NSLog(@"conversion error: %@", error.userInfo);
       }
    }];
}
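
    For reference, the FFmpegWrapper call above amounts to a container remux from MP4 to MPEG-TS. A command-line sketch of the equivalent step, assuming the wrapper does a straight stream copy (the segment file name is hypothetical):

    # remux one recorded segment; h264_mp4toannexb rewrites the H.264 headers for TS
    ffmpeg -i segment_0.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts segment_0.ts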

    This code is used for live streaming, sending the data with an RTMP wrapper.
    It does not write to the socket properly, because a different output file is generated every 5 seconds.

    Is this the proper way?

    I have no idea how to get the NSData the proper way.

    Please help me.

  • FFmpeg - MJPEG decoding gives inconsistent values

    28 December 2016, by ahmadh

    I have a set of JPEG frames which I am muxing into an AVI, which gives me an MJPEG video. This is the command I run on the console:

    ffmpeg -y -start_number 0 -i %06d.JPEG -codec copy vid.avi

    When I try to demux the video using the FFmpeg C API, I get frames whose values are slightly different. The demuxing code looks something like this:

    AVFormatContext* fmt_ctx = NULL;
    AVCodecContext* cdc_ctx = NULL;
    AVCodec* vid_cdc = NULL;
    int ret;
    unsigned int height, width;

    ....
    // read_nframes is the number of frames to read
    output_arr = new unsigned char [height * width * 3 *
                                   sizeof(unsigned char) * read_nframes];

    avcodec_open2(cdc_ctx, vid_cdc, NULL);

    int num_bytes;
    uint8_t* buffer = NULL;
    const AVPixelFormat out_format = AV_PIX_FMT_RGB24;

    num_bytes = av_image_get_buffer_size(out_format, width, height, 1);
    buffer = (uint8_t*)av_malloc(num_bytes * sizeof(uint8_t));

    AVFrame* vid_frame = NULL;
    vid_frame = av_frame_alloc();
    AVFrame* conv_frame = NULL;
    conv_frame = av_frame_alloc();

    av_image_fill_arrays(conv_frame->data, conv_frame->linesize, buffer,
                        out_format, width, height, 1);

    struct SwsContext *sws_ctx = NULL;
    sws_ctx = sws_getContext(width, height, cdc_ctx->pix_fmt,
                            width, height, out_format,
                            SWS_BILINEAR, NULL,NULL,NULL);

    int frame_num = 0;
    AVPacket vid_pckt;
    while (av_read_frame(fmt_ctx, &vid_pckt) >= 0) {
       ret = avcodec_send_packet(cdc_ctx, &vid_pckt);
       av_packet_unref(&vid_pckt);  // the packet is no longer needed once it has been sent
       if (ret < 0)
           break;

       ret = avcodec_receive_frame(cdc_ctx, vid_frame);
       if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
           break;
       if (ret >= 0) {
           // convert from the native pixel format to packed RGB24
           sws_scale(sws_ctx, vid_frame->data,
                     vid_frame->linesize, 0, vid_frame->height,
                     conv_frame->data, conv_frame->linesize);

           unsigned char* r_ptr = output_arr +
               (height * width * sizeof(unsigned char) * 3 * frame_num);
           unsigned char* g_ptr = r_ptr + (height * width * sizeof(unsigned char));
           unsigned char* b_ptr = g_ptr + (height * width * sizeof(unsigned char));
           unsigned int pxl_i = 0;

           for (unsigned int r = 0; r < height; ++r) {
               uint8_t* avframe_r = conv_frame->data[0] + r*conv_frame->linesize[0];
               for (unsigned int c = 0; c < width; ++c) {
                   r_ptr[pxl_i] = avframe_r[0];
                   g_ptr[pxl_i]   = avframe_r[1];
                   b_ptr[pxl_i]   = avframe_r[2];
                   avframe_r += 3;
                   ++pxl_i;
               }
           }

           ++frame_num;

           if (frame_num >= read_nframes)
               break;
       }
    }

    ...

    In my experience around two-thirds of the pixel values differ, each by ±1 (in a range of [0,255]). I am wondering whether this is due to some decoding scheme FFmpeg uses for reading JPEG frames. I tried encoding and decoding PNG frames, and that works perfectly fine. I am sure this has something to do with the libav decoding process, because the MD5 values are consistent between the images and the video:

    ffmpeg -i %06d.JPEG -f framemd5 -
    ffmpeg -i vid.avi -f framemd5 -
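
    If the ±1 offsets come from swscale's integer-rounded YUV to RGB conversion rather than from the JPEG decode itself, one quick test from the console is to force swscale's accurate-rounding path on both inputs and compare the raw RGB output. accurate_rnd and full_chroma_int are standard sws_flags values; that they account for the differences seen here is only an assumption:

    ffmpeg -start_number 0 -i %06d.JPEG -sws_flags accurate_rnd+full_chroma_int -pix_fmt rgb24 -f rawvideo imgs.rgb
    ffmpeg -i vid.avi -sws_flags accurate_rnd+full_chroma_int -pix_fmt rgb24 -f rawvideo vid.rgb
    cmp imgs.rgb vid.rgb

    The in-code equivalent would be OR-ing SWS_ACCURATE_RND | SWS_FULL_CHR_H_INT into the flags passed to sws_getContext() above.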

    In short, my goal is to get the same pixel-by-pixel values for each JPEG frame as I would have gotten had I read the JPEG images directly. Here is the stand-alone bitbucket code I used. It includes cmake files to build the code, and a couple of JPEG frames with the converted AVI file to reproduce the problem (pass '--filetype png' to test the PNG decoding).