Advanced search

Media (1)

Word: - Tags -/sintel

Other articles (65)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Participating in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    At the moment, MediaSPIP is only available in French and (...)

  • Installation in farm mode

    4 February 2011

    Farm mode makes it possible to host several MediaSPIP-type sites while installing their functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge, since SPIP's usual private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

On other sites (9500)

  • How to write data over RTMP when a new video output file is generated every 5 seconds on iOS? [on hold]

    16 July 2015, by Sandeep Joshi
    - (void)segmentRecording:(NSTimer *)timer {
    if (!shouldBeRecording) {
       [timer invalidate];
    }
    AVAssetWriter *tempAssetWriter = self.assetWriter;
    AVAssetWriterInput *tempAudioEncoder = self.audioEncoder;
    AVAssetWriterInput *tempVideoEncoder = self.videoEncoder;
    self.assetWriter = queuedAssetWriter;
    self.audioEncoder = queuedAudioEncoder;
    self.videoEncoder = queuedVideoEncoder;
    NSLog(@"Switching encoders");

    dispatch_async(segmentingQueue, ^{
       if (tempAssetWriter.status == AVAssetWriterStatusWriting) {
           @try {
               [tempAudioEncoder markAsFinished];
               [tempVideoEncoder markAsFinished];
               [tempAssetWriter finishWritingWithCompletionHandler:^{
                   if (tempAssetWriter.status == AVAssetWriterStatusFailed) {
                       [self showError:tempAssetWriter.error];
                   } else {
                       [self uploadLocalURL:tempAssetWriter.outputURL];
                   }
               }];
           }
           @catch (NSException *exception) {
               NSLog(@"Caught exception: %@", [exception description]);
               //[BugSenseController logException:exception withExtraData:nil];
           }
       }
       self.segmentCount++;
       if (self.readyToRecordAudio && self.readyToRecordVideo) {
           NSError *error = nil;
           self.queuedAssetWriter = [[AVAssetWriter alloc] initWithURL:[OWUtilities urlForRecordingSegmentCount:segmentCount basePath:self.basePath] fileType:(NSString *)kUTTypeMPEG4 error:&error];
           if (error) {
               [self showError:error];
           }
           self.queuedVideoEncoder = [self setupVideoEncoderWithAssetWriter:self.queuedAssetWriter formatDescription:videoFormatDescription bitsPerSecond:videoBPS];
           self.queuedAudioEncoder = [self setupAudioEncoderWithAssetWriter:self.queuedAssetWriter formatDescription:audioFormatDescription bitsPerSecond:audioBPS];
           //NSLog(@"Encoder switch finished");
       }
    });
    }



    - (void)uploadLocalURL:(NSURL *)url {
    NSLog(@"upload local url: %@", url);
    NSString *inputPath = [url path];
    NSString *outputPath = [inputPath stringByReplacingOccurrencesOfString:@".mp4" withString:@".ts"];
    NSString *outputFileName = [outputPath lastPathComponent];
    NSDictionary *options = @{kFFmpegOutputFormatKey: @"mpegts"};
    NSLog(@"%@ conversion...", outputFileName);
    [ffmpegWrapper convertInputPath:[url path] outputPath:outputPath options:options progressBlock:nil completionBlock:^(BOOL success, NSError *error) {
       if (success) {
           if (!isRtmpConnected) {
               isRtmpConnected = [rtmp openWithURL:HOST_URL enableWrite:YES];
           }
           isRtmpConnected = [rtmp isConnected];

           if (isRtmpConnected) {

               // outputPath is a plain file path, not a URL string, so load it from disk directly
               NSData *video = [NSData dataWithContentsOfFile:outputPath];
               NSUInteger length = [video length];
               NSUInteger chunkSize = 1024 * 5;
               NSUInteger offset = 0;
               NSLog(@"original video length: %lu \n chunkSize : %lu", length,chunkSize);
             // Let's split video to small chunks to publish to media server
               do {
                   NSUInteger thisChunkSize = length - offset > chunkSize ? chunkSize : length - offset;
                   NSData* chunk = [NSData dataWithBytesNoCopy:(char *)[video bytes] + offset
                                                        length:thisChunkSize
                                                  freeWhenDone:NO];
                   offset += thisChunkSize;

                   // Write new chunk to rtmp server
                   NSLog(@"%lu", (unsigned long)[rtmp write:chunk]);
                   sleep(1);
               } while (offset < length);
           }else{
               [rtmp close];
           }


       } else {
           NSLog(@"conversion error: %@", error.userInfo);
       }
    }];
    }

    This code is used for live streaming, sending data with an RTMP wrapper.
    It does not write to the socket properly, because a different output file is generated every 5 seconds.

    Is this the proper way?

    I have no idea how to get the NSData the proper way.

    Please help me.
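
    One possible approach (untested): instead of loading the whole segment into memory with NSData, read the finished .ts file from disk in fixed-size chunks and hand each chunk to the RTMP writer as it is read. A minimal C sketch of that loop follows; the chunk_writer callback is a hypothetical stand-in for the RTMP wrapper's write call, not part of its actual API:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical stand-in for [rtmp write:chunk]. */
    typedef int (*chunk_writer)(const unsigned char *buf, size_t len);

    /* Read the file at `path` in fixed-size chunks and pass each chunk
       to `write_cb` as soon as it is read. Returns 0 on success. */
    static int stream_file_in_chunks(const char *path, size_t chunk_size,
                                     chunk_writer write_cb) {
        FILE *f = fopen(path, "rb");
        if (!f)
            return -1;
        unsigned char *buf = malloc(chunk_size);
        if (!buf) {
            fclose(f);
            return -1;
        }
        int err = 0;
        size_t n;
        while ((n = fread(buf, 1, chunk_size, f)) > 0) {
            if (write_cb(buf, n) < 0) {  /* writer failed; stop streaming */
                err = -1;
                break;
            }
        }
        if (ferror(f))
            err = -1;  /* read error */
        free(buf);
        fclose(f);
        return err;
    }

    Reading incrementally this way avoids holding the full segment in memory and removes the need for the dataWithBytesNoCopy offset arithmetic above.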

  • FFmpeg - MJPEG decoding gives inconsistent values

    28 December 2016, by ahmadh

    I have a set of JPEG frames which I am muxing into an avi, which gives me an mjpeg video. This is the command I run on the console:

    ffmpeg -y -start_number 0 -i %06d.JPEG -codec copy vid.avi

    When I try to demux the video using the FFmpeg C API, I get frames whose values are slightly different. The demuxing code looks something like this:

    AVFormatContext* fmt_ctx = NULL;
    AVCodecContext* cdc_ctx = NULL;
    AVCodec* vid_cdc = NULL;
    int ret;
    unsigned int height, width;

    ....
    // read_nframes is the number of frames to read
    output_arr = new unsigned char [height * width * 3 *
                                   sizeof(unsigned char) * read_nframes];

    avcodec_open2(cdc_ctx, vid_cdc, NULL);

    int num_bytes;
    uint8_t* buffer = NULL;
    const AVPixelFormat out_format = AV_PIX_FMT_RGB24;

    num_bytes = av_image_get_buffer_size(out_format, width, height, 1);
    buffer = (uint8_t*)av_malloc(num_bytes * sizeof(uint8_t));

    AVFrame* vid_frame = NULL;
    vid_frame = av_frame_alloc();
    AVFrame* conv_frame = NULL;
    conv_frame = av_frame_alloc();

    av_image_fill_arrays(conv_frame->data, conv_frame->linesize, buffer,
                        out_format, width, height, 1);

    struct SwsContext *sws_ctx = NULL;
    sws_ctx = sws_getContext(width, height, cdc_ctx->pix_fmt,
                            width, height, out_format,
                            SWS_BILINEAR, NULL,NULL,NULL);

    int frame_num = 0;
    AVPacket vid_pckt;
    while (av_read_frame(fmt_ctx, &vid_pckt) >= 0) {
       ret = avcodec_send_packet(cdc_ctx, &vid_pckt);
       av_packet_unref(&vid_pckt); // the decoder keeps its own reference; release ours
       if (ret < 0)
           break;

       ret = avcodec_receive_frame(cdc_ctx, vid_frame);
       if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
           break;
       if (ret >= 0) {
           // convert the image from its native format to packed RGB24
           sws_scale(sws_ctx, vid_frame->data,
                     vid_frame->linesize, 0, vid_frame->height,
                     conv_frame->data, conv_frame->linesize);

           unsigned char* r_ptr = output_arr +
               (height * width * sizeof(unsigned char) * 3 * frame_num);
           unsigned char* g_ptr = r_ptr + (height * width * sizeof(unsigned char));
           unsigned char* b_ptr = g_ptr + (height * width * sizeof(unsigned char));
           unsigned int pxl_i = 0;

           for (unsigned int r = 0; r < height; ++r) {
               uint8_t* avframe_r = conv_frame->data[0] + r*conv_frame->linesize[0];
               for (unsigned int c = 0; c < width; ++c) {
                   r_ptr[pxl_i] = avframe_r[0];
                   g_ptr[pxl_i]   = avframe_r[1];
                   b_ptr[pxl_i]   = avframe_r[2];
                   avframe_r += 3;
                   ++pxl_i;
               }
           }

           ++frame_num;

           if (frame_num >= read_nframes)
               break;
       }
    }

    ...

    In my experience around two-thirds of the pixel values are different, each by ±1 (in a range of [0,255]). I am wondering whether this is due to the decoding scheme FFmpeg uses for reading JPEG frames. I tried encoding and decoding PNG frames, and that works perfectly fine. I am sure it is something to do with the libav decoding process, because the frame MD5 values are consistent between the images and the video:

    ffmpeg -i %06d.JPEG -f framemd5 -
    ffmpeg -i vid.avi -f framemd5 -

    In short, my goal is to get the same pixel-by-pixel values for each JPEG frame as I would have gotten by reading the JPEG images directly. Here is the stand-alone Bitbucket code I used. It includes cmake files to build the code, and a couple of JPEG frames with the converted avi file to reproduce this problem (pass '--filetype png' to test the PNG decoding).
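
    To pin down the "±1" observation, the two decode paths can be compared sample by sample once both land in packed RGB24. A minimal sketch under that assumption; diff_rgb24 and its arguments are placeholders, not part of the code above:

    #include <stdio.h>
    #include <stdlib.h>

    /* Compare two packed RGB24 buffers of `count` bytes each and report
       how many samples differ and the largest absolute difference seen. */
    static void diff_rgb24(const unsigned char *a, const unsigned char *b,
                           size_t count) {
        size_t n_diff = 0;
        int max_delta = 0;
        for (size_t i = 0; i < count; ++i) {
            int d = (int)a[i] - (int)b[i];
            if (d < 0)
                d = -d;
            if (d > 0) {
                ++n_diff;
                if (d > max_delta)
                    max_delta = d;
            }
        }
        printf("%zu of %zu samples differ, max delta %d\n",
               n_diff, count, max_delta);
    }

    If max_delta never exceeds 1, rounding differences in the IDCT or in the YUV-to-RGB conversion are the likely culprit rather than outright data corruption.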

  • How do I upscale an iOS App Preview video to 1080 x 1920? [closed]

    12 April 2024, by Benjamin Thiel

    I just captured a video of my new app running on an iPhone 6 using QuickTime Player and a Lightning cable. Afterwards I created an App Preview project in iMovie, exported it and could successfully upload it to iTunes Connect.

    Apple requires developers to upload App Previews in different resolutions depending on screen size, namely:

    • iPhone 5(S): 1080 x 1920 or 640 x 1136
    • iPhone 6: 750 x 1334 (what I have)
    • iPhone 6+: 1080 x 1920

    Obviously, 1080 x 1920 kills two birds with one stone. I know that upscaling isn't the perfect solution, but it meets my needs. Since I don't own a 6+, another recording session won't do the trick.

    Unfortunately, iTunes Connect is extremely picky about what it accepts. Here's what I tried, to no avail:

    • Handbrake, iMovie and QuickTime do not support upscaling
    • MPEG Streamclip
    • ffmpeg -i input.mp4 -acodec copy -vf scale=1080:1920 output.mp4

    Strangely enough, iTunes Connect keeps complaining about a wrong resolution when I try to upload the output.mp4 produced by ffmpeg.
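
    One guess that may be worth trying: scaling can leave non-square-pixel SAR metadata or an unexpected pixel format in the output, both of which strict validators tend to reject. Forcing square pixels and 4:2:0 chroma during the re-encode would look like this (setsar=1, -pix_fmt yuv420p and libx264 are standard ffmpeg options, but whether this satisfies iTunes Connect is untested):

    ffmpeg -i input.mp4 -acodec copy -vcodec libx264 -vf "scale=1080:1920,setsar=1" -pix_fmt yuv420p output.mp4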