
Other articles (58)

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which requires no real specific knowledge, since SPIP’s usual private area is no longer used.
    First, you need to have installed the same files as the installation (...)

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

        Distribution    Version name            Version number
        Debian          Squeeze                 6.x.x
        Debian          Wheezy                  7.x.x
        Debian          Jessie                  8.x.x
        Ubuntu          The Precise Pangolin    12.04 LTS
        Ubuntu          The Trusty Tahr         14.04
    If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send us the fixes needed to add (...)

  • Automatic backup of SPIP channels

    1 April 2010, by

    When setting up an open platform, it is important for the hosts to have reasonably regular backups available in order to guard against any potential problem.
    This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database in the form of a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which produces a zip archive of the site’s important data (the documents, the elements (...)

On other sites (8513)

  • H264 codec encode, decode and write to file

    30 November 2020, by Алекс Аникей

    I am trying to use FFmpeg with the H264 codec to transmit video in real time, but at the stage of decoding an encoded frame I get a "bad" image.
Encoder and decoder initialization:

    


VCSession *vc_new_x264(Logger *log, ToxAV *av, uint32_t friend_number,
                       toxav_video_receive_frame_cb *cb, void *cb_data,
                       VCSession *vc)
{

// ------ ffmpeg encoder ------
    AVCodec *codec2 = NULL;
    vc->h264_encoder_ctx = NULL; // AVCodecContext *

    avcodec_register_all(); // no-op / deprecated since FFmpeg 4.0
    codec2 = avcodec_find_encoder(AV_CODEC_ID_H264);
    if (codec2 == NULL)
    {
        LOGGER_WARNING(log, "h264: not find encoder");
    }

    vc->h264_encoder_ctx = avcodec_alloc_context3(codec2);

    vc->h264_out_pic2 = av_packet_alloc();

    vc->h264_encoder_ctx->bit_rate = 10 *1000 * 1000;
    vc->h264_encoder_ctx->width = 800;
    vc->h264_encoder_ctx->height = 600;

    vc->h264_enc_width = vc->h264_encoder_ctx->width;
    vc->h264_enc_height = vc->h264_encoder_ctx->height;
    vc->h264_encoder_ctx->time_base = (AVRational) {
            1, 30
    };
    vc->h264_encoder_ctx->gop_size = 30;
    vc->h264_encoder_ctx->max_b_frames = 1;
    vc->h264_encoder_ctx->pix_fmt = AV_PIX_FMT_YUV420P;


    av_opt_set(vc->h264_encoder_ctx->priv_data, "preset", "veryfast", 0);


    av_opt_set(vc->h264_encoder_ctx->priv_data, "annex_b", "1", 0);
    av_opt_set(vc->h264_encoder_ctx->priv_data, "repeat_headers", "1", 0);
    av_opt_set(vc->h264_encoder_ctx->priv_data, "tune", "zerolatency", 0);
    av_opt_set_int(vc->h264_encoder_ctx->priv_data, "zerolatency", 1, 0);

    // note: this overrides the time_base of 1/30 set above; the final value,
    // 1/1000 (millisecond timestamps), is the one that takes effect
    vc->h264_encoder_ctx->time_base.num = 1;
    vc->h264_encoder_ctx->time_base.den = 1000;

    vc->h264_encoder_ctx->framerate = (AVRational) {
        1000, 40   // i.e. 25 fps
    };

    AVDictionary *opts = NULL;

    if (avcodec_open2(vc->h264_encoder_ctx, codec2, &opts) < 0) {
        LOGGER_ERROR(log, "could not open codec H264 on encoder");
    }

    av_dict_free(&opts);



    AVCodec *codec = NULL;
    vc->h264_decoder_ctx = NULL; // AVCodecContext *

    codec = avcodec_find_decoder(AV_CODEC_ID_H264);

    if (!codec) {
        LOGGER_WARNING(log, "codec not found H264 on decoder");
    }

    vc->h264_decoder_ctx = avcodec_alloc_context3(codec);

    // AV_CODEC_CAP_TRUNCATED / AV_CODEC_FLAG_TRUNCATED are deprecated and were
    // removed in newer FFmpeg releases; guard this if targeting old versions
    if (codec->capabilities & AV_CODEC_CAP_TRUNCATED) {
        vc->h264_decoder_ctx->flags |= AV_CODEC_FLAG_TRUNCATED; /* we do not send complete frames */
    }

    // bug fix: AV_CODEC_FLAG_LOW_DELAY is a context flag, not a capability bit,
    // so testing codec->capabilities against it was meaningless; set it directly
    vc->h264_decoder_ctx->flags |= AV_CODEC_FLAG_LOW_DELAY;

    vc->h264_decoder_ctx->flags |= AV_CODEC_FLAG2_SHOW_ALL;

    vc->h264_decoder_ctx->refcounted_frames = 0; // deprecated in newer FFmpeg

    vc->h264_decoder_ctx->delay = 0;
    vc->h264_decoder_ctx->sw_pix_fmt = AV_PIX_FMT_YUV420P;
    av_opt_set_int(vc->h264_decoder_ctx->priv_data, "delay", 0, AV_OPT_SEARCH_CHILDREN);
    vc->h264_decoder_ctx->time_base = (AVRational) {
        40, 1000
    };
    vc->h264_decoder_ctx->framerate = (AVRational) {
        1000, 40
    };

    if (avcodec_open2(vc->h264_decoder_ctx, codec, NULL) < 0) {
        LOGGER_WARNING(log, "could not open codec H264 on decoder");
    }

    return vc;
}


    


    Encoding (in this function I encode the frame and, for debugging, decode it and save it to a file):

    


uint32_t encode_frame_h264_p(ToxAV *av, uint32_t friend_number, uint16_t width, uint16_t height,
                           const uint8_t *y,
                           const uint8_t *u, const uint8_t *v, ToxAVCall *call,
                           uint64_t *video_frame_record_timestamp,
                           int vpx_encode_flags,
                           x264_nal_t **nal,
                           int *i_frame_size)
{
    AVFrame *frame;
    int ret;
    uint32_t result = 1;

    frame = av_frame_alloc();

    frame->format = call->video->h264_encoder_ctx->pix_fmt;
    frame->width  = width;   // must match the encoder's configured 800x600
    frame->height = height;

    ret = av_frame_get_buffer(frame, 32);

    if (ret < 0) {
        LOGGER_ERROR(av->m->log, "av_frame_get_buffer:Could not allocate the video frame data");
    }

    /* make sure the frame data is writable */
    ret = av_frame_make_writable(frame);

    if (ret < 0) {
        LOGGER_ERROR(av->m->log, "av_frame_make_writable:ERROR");
    }

    frame->pts = (int64_t)(*video_frame_record_timestamp);


    // copy YUV frame data into the buffers; note that this assumes the planes
    // allocated by av_frame_get_buffer() are tightly packed: if
    // frame->linesize[i] is padded wider than the copied width, a row-by-row
    // copy is needed instead, and the mismatch shows up as a skewed image
    memcpy(frame->data[0], y, width * height);
    memcpy(frame->data[1], u, (width / 2) * (height / 2));
    memcpy(frame->data[2], v, (width / 2) * (height / 2));

    // encode the frame
    ret = avcodec_send_frame(call->video->h264_encoder_ctx, frame);

    if (ret < 0) {
        LOGGER_ERROR(av->m->log, "Error sending a frame for encoding:ERROR");
    }


    // note: a single receive call assumes one packet per frame; an encoder
    // with B-frames or internal buffering may return EAGAIN here instead
    ret = avcodec_receive_packet(call->video->h264_encoder_ctx, call->video->h264_out_pic2);



    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
        *i_frame_size = 0;
    } else if (ret < 0) {
        *i_frame_size = 0;
        // fprintf(stderr, "Error during encoding\n");
    } else {

        // Decode the encoded frame and save it to a file (debugging aid)

        saveInFile(call->video->h264_decoder_ctx, frame, call->video->h264_out_pic2, "/home/user/testSave");

        // printf("Write packet %3"PRId64" (size=%5d)\n", call->video->h264_out_pic2->pts, call->video->h264_out_pic2->size);
        // fwrite(call->video->h264_out_pic2->data, 1, call->video->h264_out_pic2->size, outfile);

        global_encoder_delay_counter++;

        if (global_encoder_delay_counter > 60) {
            global_encoder_delay_counter = 0;
            LOGGER_DEBUG(av->m->log, "enc:delay=%ld",
                         (long int)(frame->pts - (int64_t)call->video->h264_out_pic2->pts)
                        );
        }


        *i_frame_size = call->video->h264_out_pic2->size;
        *video_frame_record_timestamp = (uint64_t)call->video->h264_out_pic2->pts;

        result = 0;
    }

    av_frame_free(&frame);

    return result;

}
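
    Note that avcodec_send_frame()/avcodec_receive_packet() form an
    asynchronous pair, and the encoder may buffer frames before producing
    output. A minimal sketch of the canonical drain loop (independent of the
    structures above; enc, frame and pkt are assumed to be already allocated):

    int encode_and_drain(AVCodecContext *enc, AVFrame *frame, AVPacket *pkt)
    {
        int ret = avcodec_send_frame(enc, frame); // frame == NULL flushes

        if (ret < 0)
            return ret;

        while (ret >= 0) {
            ret = avcodec_receive_packet(enc, pkt);

            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return 0;   // needs more input, or fully flushed
            if (ret < 0)
                return ret; // genuine error

            /* ... consume pkt->data / pkt->size here ... */
            av_packet_unref(pkt);
        }

        return 0;
    }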


    


    The decode-and-save code:

    


void saveInFile(AVCodecContext *dec_ctx, AVFrame *frame, AVPacket *pkt, const char *filename)
{
    if (!pkt)
        return;

    char buf[1024];
    int ret;
    static int curNumber = 0;

    ret = avcodec_send_packet(dec_ctx, pkt);

    if (ret < 0 && ret != AVERROR_EOF) {
        fprintf(stderr, "Error sending a packet for decoding\n");

        if (ret == AVERROR(EAGAIN) || ret == AVERROR(EINVAL) || ret == AVERROR(ENOMEM))
            return;
    }

    // note: this reuses the caller's input frame as the decode target,
    // overwriting the picture that was just encoded
    ret = avcodec_receive_frame(dec_ctx, frame);

    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        return;

    if (ret < 0) {
        fprintf(stderr, "Error during decoding\n");
        return; // bug fix: do not save a frame that failed to decode
    }

    printf("saving frame %3d\n", dec_ctx->frame_number);
    sprintf(buf, "%s%d", filename, curNumber);
    curNumber++;
    pgm_save(frame->data[0], frame->linesize[0], frame->width, frame->height, buf);
}

// Write the Y (luma) plane as a binary greyscale PGM image.
void pgm_save(unsigned char *buf, int wrap, int xsize, int ysize, char *filename)
{
    FILE *f;
    int i;

    f = fopen(filename, "wb"); // binary mode matters on non-POSIX platforms
    fprintf(f, "P5\n%d %d\n%d\n", xsize, ysize, 255);

    for (i = 0; i < ysize; i++)
        fwrite(buf + i * wrap, 1, xsize, f);

    fclose(f);
}
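
    Since pgm_save() writes only the luma plane, the saved images are
    greyscale. A minimal sketch of a colour variant, assuming libswscale is
    available (ppm_save is a hypothetical helper, not part of the code above):

    #include <stdio.h>
    #include <stdlib.h>
    #include <libavutil/frame.h>
    #include <libswscale/swscale.h>

    // Convert a decoded YUV420P frame to RGB24 and write it as a binary PPM.
    static void ppm_save(const AVFrame *frame, const char *filename)
    {
        struct SwsContext *sws = sws_getContext(frame->width, frame->height,
                                                AV_PIX_FMT_YUV420P,
                                                frame->width, frame->height,
                                                AV_PIX_FMT_RGB24,
                                                SWS_BILINEAR, NULL, NULL, NULL);
        uint8_t *rgb = malloc((size_t)frame->width * frame->height * 3);
        uint8_t *dst[1] = { rgb };
        int dst_linesize[1] = { frame->width * 3 };

        sws_scale(sws, (const uint8_t *const *)frame->data, frame->linesize,
                  0, frame->height, dst, dst_linesize);

        FILE *f = fopen(filename, "wb");
        fprintf(f, "P6\n%d %d\n255\n", frame->width, frame->height);
        fwrite(rgb, 1, (size_t)frame->width * frame->height * 3, f);
        fclose(f);

        free(rgb);
        sws_freeContext(sws);
    }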


    


    After these steps I end up with a corrupted ("bad") image:
[bad image]

    


  • After rotating video with AVMutableVideoComposition, ffmpeg can't get the rotate info

    29 January 2018, by ladeng

    Before the video is rotated, I can get its rotation information with an FFmpeg command like this:

    ffprobe -v quiet -print_format json -show_format -show_streams recordVideo.mp4

    or

    ffmpeg -i recordVideo.mp4
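
    In the ffprobe JSON output the rotation typically appears on the video
    stream, as a rotate tag in older FFmpeg builds (newer builds report it as
    displaymatrix side data instead), for example:

    "tags": {
        "rotate": "90"
    }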

    When I use AVMutableVideoComposition to rotate the video, the rotation information is lost. A simple rotation sample is at RoatetVideoSimpleCode, and the
    code is below:

    -(void)performWithAsset:(AVAsset*)asset complateBlock:(void(^)(void))complateBlock{
    AVMutableComposition *mutableComposition;
    AVMutableVideoComposition *mutableVideoComposition;
    cacheRotateVideoURL = [[NSURL alloc] initFileURLWithPath:[NSString pathWithComponents:@[NSTemporaryDirectory(), kCacheCertVideoRotate]]];

    AVMutableVideoCompositionInstruction *instruction = nil;
    AVMutableVideoCompositionLayerInstruction *layerInstruction = nil;
    CGAffineTransform t1;
    CGAffineTransform t2;

    AVAssetTrack *assetVideoTrack = nil;
    AVAssetTrack *assetAudioTrack = nil;
    // Check if the asset contains video and audio tracks
    if ([[asset tracksWithMediaType:AVMediaTypeVideo] count] != 0) {
       assetVideoTrack = [asset tracksWithMediaType:AVMediaTypeVideo][0];
    }
    if ([[asset tracksWithMediaType:AVMediaTypeAudio] count] != 0) {
       assetAudioTrack = [asset tracksWithMediaType:AVMediaTypeAudio][0];
    }

    CMTime insertionPoint = kCMTimeZero;
    NSError *error = nil;

    //    CGAffineTransform rotateTranslate;
    // Step 1
    // Create a composition with the given asset and insert audio and video tracks into it from the asset
    if (!mutableComposition) {

       // Check whether a composition has already been created, i.e, some other tool has already been applied
       // Create a new composition
       mutableComposition = [AVMutableComposition composition];

       // Insert the video and audio tracks from AVAsset
       if (assetVideoTrack != nil) {
           AVMutableCompositionTrack *compositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
           [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetVideoTrack atTime:insertionPoint error:&error];

       }
       if (assetAudioTrack != nil) {
           AVMutableCompositionTrack *compositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
           [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetAudioTrack atTime:insertionPoint error:&error];
       }

    }


    // Step 2
    // Translate the composition to compensate the movement caused by rotation (since rotation would cause it to move out of frame)
    //    t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.height, 0.0);
    // Rotate transformation
    //    t2 = CGAffineTransformRotate(t1, degreesToRadians(90.0));

    CGFloat degrees = 90;
    //--
    if (degrees != 0) {
       //        CGAffineTransform mixedTransform;
       if(degrees == 90){
           //90°
           t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.height,0.0);
           t2 = CGAffineTransformRotate(t1,M_PI_2);
       }else if(degrees == 180){
           //180°
           t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.width, assetVideoTrack.naturalSize.height);
           t2 = CGAffineTransformRotate(t1,M_PI);
       }else if(degrees == 270){
           //270°
           t1 = CGAffineTransformMakeTranslation(0.0, assetVideoTrack.naturalSize.width);
           t2 = CGAffineTransformRotate(t1,M_PI_2*3.0);
       }
    }

    // Step 3
    // Set the appropriate render sizes and rotational transforms
    if (!mutableVideoComposition) {

       // Create a new video composition
       mutableVideoComposition = [AVMutableVideoComposition videoComposition];
       mutableVideoComposition.renderSize = CGSizeMake(assetVideoTrack.naturalSize.height,assetVideoTrack.naturalSize.width);
       mutableVideoComposition.frameDuration = CMTimeMake(1, 30);

       // The rotate transform is set on a layer instruction
       instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
       instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [mutableComposition duration]);
       layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:(mutableComposition.tracks)[0]];
       [layerInstruction setTransform:t2 atTime:kCMTimeZero];
       //           [layerInstruction setTransform:rotateTranslate atTime:kCMTimeZero];

    } else {

       mutableVideoComposition.renderSize = CGSizeMake(mutableVideoComposition.renderSize.height, mutableVideoComposition.renderSize.width);

       // Extract the existing layer instruction on the mutableVideoComposition
       instruction = (mutableVideoComposition.instructions)[0];
       layerInstruction = (instruction.layerInstructions)[0];

       // Check if a transform already exists on this layer instruction, this is done to add the current transform on top of previous edits
       CGAffineTransform existingTransform;

       if (![layerInstruction getTransformRampForTime:[mutableComposition duration] startTransform:&existingTransform endTransform:NULL timeRange:NULL]) {
           [layerInstruction setTransform:t2 atTime:kCMTimeZero];
       } else {
           // Note: the point of origin for rotation is the upper left corner of the composition, t3 is to compensate for origin
           CGAffineTransform t3 = CGAffineTransformMakeTranslation(-1*assetVideoTrack.naturalSize.height/2, 0.0);
           CGAffineTransform newTransform = CGAffineTransformConcat(existingTransform, CGAffineTransformConcat(t2, t3));
           [layerInstruction setTransform:newTransform atTime:kCMTimeZero];
       }

    }


    // Step 4
    // Add the transform instructions to the video composition
    instruction.layerInstructions = @[layerInstruction];
    mutableVideoComposition.instructions = @[instruction];

    //write video
    if ([[NSFileManager  defaultManager] fileExistsAtPath:cacheRotateVideoURL.path]) {
       NSError *error = nil;
       BOOL removeFlag = [[NSFileManager  defaultManager] removeItemAtURL:cacheRotateVideoURL error:&error];
       SPLog(@"remove rotate file:%@ %@",cacheRotateVideoURL.path,removeFlag?@"Success":@"Failed");
    }

    AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetMediumQuality] ;

    exportSession.outputURL = cacheRotateVideoURL;
    exportSession.outputFileType = AVFileTypeMPEG4;
    exportSession.videoComposition = mutableVideoComposition;
    exportSession.shouldOptimizeForNetworkUse = YES;
    exportSession.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration);

    [exportSession exportAsynchronouslyWithCompletionHandler:^{
       SPLog(@"cache write done");
       AVAsset* asset = [AVURLAsset URLAssetWithURL: cacheRotateVideoURL options:nil];
       SPLog(@"rotate recrod video time: %lf",CMTimeGetSeconds(asset.duration));

       ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
       [library writeVideoAtPathToSavedPhotosAlbum:cacheRotateVideoURL
                                   completionBlock:^(NSURL *assetURL, NSError *error) {
                                       if (error) {
                                           NSLog(@"Save video fail:%@",error);
                                       } else {
                                           NSLog(@"Save video succeed.");
                                       }
                                   }];

       complateBlock();
    }];

    }

    Can anyone tell me why this happens?
    How can I write the rotation information back when I rotate the video?
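
    One possible workaround (untested here; the file names are placeholders) is to stamp the rotation metadata back onto the exported file with ffmpeg, without re-encoding:

    ffmpeg -i rotated.mp4 -c copy -metadata:s:v:0 rotate=90 withRotateTag.mp4

    Note that recent FFmpeg versions store rotation as display-matrix side data rather than a rotate tag, so the exact mechanism depends on the FFmpeg version in use.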

  • Sequencing MIDI From A Chiptune

    28 April 2013, by Multimedia Mike — Outlandish Brainstorms

    The feature requests for my game music appreciation website project continue to pour in. Many of them take the form of “please add player support for system XYZ and the chiptune library to go with it.” Most of these requests are A) plausible, and B) in process. I have also received recommendations for UI improvements which I take under consideration. Then there are the numerous requests to port everything from Native Client to JavaScript so that it will work everywhere, even on mobile, a notion which might take a separate post to debunk entirely.

    But here’s an interesting request about which I would like to speculate: Automatically convert a chiptune into a MIDI file. I immediately wanted to dismiss it as impossible or highly implausible. But, as is my habit, I started pondering the concept a little more carefully and decided that there’s an outside chance of getting some part of the idea to work.

    Intro to MIDI
    MIDI stands for Musical Instrument Digital Interface. It’s a standard musical interchange format and allows music instruments and computers to exchange musical information. The file interchange format bears the extension .mid and contains a sequence of numbers that translate into commands separated by time deltas. E.g.: turn key on (this note, this velocity); wait x ticks; turn key off; wait y ticks; etc. I’m vastly oversimplifying, as usual.
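
    To make that stream concrete, here is a minimal sketch of what one note could look like as raw track bytes (the tick values are hypothetical; each event is preceded by a variable-length delta-time):

    /* note on, then note off, for middle C on MIDI channel 0 */
    unsigned char events[] = {
        0x00,           /* delta-time: 0 ticks                        */
        0x90, 60, 100,  /* 0x90 = note on, note 60 (C4), velocity 100 */
        0x60,           /* delta-time: 96 ticks                       */
        0x80, 60, 0     /* 0x80 = note off, note 60, velocity 0       */
    };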

    MIDI fascinated me back in the days of dialup internet and discrete sound cards (see also my write-up on the Gravis Ultrasound). Typical song-length MIDI files often ranged from a few kilobytes to a few 10s of kilobytes. They were significantly smaller than the MOD et al. family of tracker music formats mostly by virtue of the fact that MIDI files aren’t burdened by transporting digital audio samples.

    I know I’m missing a lot of details. I haven’t dealt much with MIDI in the past… 15 years or so (ever since computer audio became a blur of MP3 and AAC audio). But I’m led to believe it’s still relevant. The individual who requested this feature expressed an interest in being able to import the sequenced data into any of the many music programs that can interpret .mid files.

    The Pitch
    To limit the scope, let’s focus on music that comes from the 8-bit Nintendo Entertainment System or the original Game Boy. The former features 2 square wave channels, a triangle wave, a noise channel, and a limited digital channel. The latter creates music via 2 square waves, a wave channel, and a noise channel. The roles these channels play typically break down as: square waves represent the primary melody, the triangle wave is used to simulate a bass line, the noise channel approximates a variety of percussive sounds, and the DPCM/wave channels are fairly free-form. They can carry random game sound effects or, if they are to assist in the music, are often used for more authentic percussive sounds.

    The various channels are controlled via an assortment of memory-mapped hardware registers. These registers are fed values such as frequency, volume, and duty cycle. My idea is to modify the music playback engine to track when various events occur. Whenever a channel is turned on or off, that corresponds to a MIDI key on or off event. If a channel is already playing but a new frequency is written, that would likely count as a note change, so log a key off event followed by a new key on event.
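
    A minimal sketch of that hook, assuming a playback engine that funnels all register writes through one function (every name here is hypothetical):

    /* Hypothetical engine/logger API assumed by this sketch: */
    int      channel_is_playing(int channel);
    double   channel_freq(int channel);
    int      current_note(int channel);
    unsigned now_ticks(void);
    int      freq_to_midi_note(double freq_hz); /* see the next sketch */
    void     midi_log_key_on(int channel, int note, unsigned tick);
    void     midi_log_key_off(int channel, int note, unsigned tick);

    enum { REG_ENABLE, REG_FREQ }; /* placeholder register ids */

    /* Called by the (modified) playback engine on every register write. */
    void on_register_write(int channel, int reg, unsigned char value)
    {
        if (reg == REG_FREQ && channel_is_playing(channel)) {
            /* new frequency on a sounding channel: log it as a note change */
            midi_log_key_off(channel, current_note(channel), now_ticks());
            midi_log_key_on(channel, freq_to_midi_note(channel_freq(channel)),
                            now_ticks());
        } else if (reg == REG_ENABLE) {
            if (value)
                midi_log_key_on(channel,
                                freq_to_midi_note(channel_freq(channel)),
                                now_ticks());
            else
                midi_log_key_off(channel, current_note(channel), now_ticks());
        }
    }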

    There is the major obstacle of what specific note is represented by a channel in a particular state. The MIDI standard defines 128 different notes spanning 11 octaves. Empirically, I wonder if I could create a table which maps the assorted frequencies to different MIDI notes?
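
    One way to derive that mapping: in equal temperament, MIDI note 69 is A4 at 440 Hz and adjacent notes differ by a factor of 2^(1/12), so a channel frequency can be snapped to the nearest note directly. A minimal sketch (freq_to_midi_note() is the helper assumed in the previous sketch):

    #include <math.h>

    /* Map a frequency in Hz to the nearest MIDI note number (0..127). */
    int freq_to_midi_note(double freq_hz)
    {
        int note = (int)lround(69.0 + 12.0 * log2(freq_hz / 440.0));

        if (note < 0)   note = 0;
        if (note > 127) note = 127;
        return note;
    }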

    I think this strategy would only work with the square and triangle waves. Noise and digital channels? I’m not prepared to tackle that challenge.

    Prior Work?
    I have to wonder if there is any existing work in this area. I’m certain that people have wanted to do this before; I wonder if anyone has succeeded?

    Just like reverse engineering a binary program entails trying to obtain a higher level abstraction of a program from a very low level representation, this challenge feels like reverse engineering a piece of music as it is being performed and automatically expressing it in a higher level form.