
Other articles (106)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add yours using the form at the bottom of the page.

  • Adding users manually as an administrator

    12 April 2011

    The administrator of a channel can at any time add one or more other users from the site's configuration area by choosing the "Gestion des utilisateurs" (user management) submenu.
    On this page it is possible to:
    1. decide how users register, via two options: accept registration from visitors of the public site, or refuse visitor registration
    2. add, modify or delete a user
    In the second form shown, an administrator can add, (...)

  • Automated installation script of MediaSPIP

    25 April 2011

    To overcome the difficulties caused mainly by the installation of server-side software dependencies, an "all-in-one" installation script written in Bash was created to facilitate this step on a server running a compatible Linux distribution.
    You must have SSH access to your server and a root account to use the script, which will install the dependencies. Contact your provider if you do not have that.
    The documentation for using this installation script is available here.
    The code of this (...)

On other sites (13934)

  • Piwik is now using Github issues as our Issue Tracker!

    9 July 2014, by Matthieu Aubry — Community, Development

    This is an announcement regarding the Issue Tracker used for the Piwik project. We are excited to announce that Piwik has migrated from Trac to Github issues for managing our issues!

    More than 5,400 tickets and 20,000+ comments from 1,000+ users were migrated to Github. Read on for more information.

    Where do I find the Piwik Issue Tracker?

    Benefits of using Github Issues for the Piwik project

    There are several advantages to moving to Github issues:

    • Faster and more responsive user interface
    • Better cross-project referencing of issues
    • Ability to notify people with the @username functionality
    • No spam
    • Integration with Pull requests and our Git repository

    How do I get notifications for all Piwik tickets?

    To receive notifications for new tickets or new comments in the Piwik project, go to github.com/piwik/piwik, then click the Watch button at the top of the page.

    In Github, watching a repository lets you follow new commits, pull requests, and issues that are created.

    How do I report a bug in Piwik?
    See Submitting a bug report.

    How do I suggest a new feature?
    See Submitting a feature request.

    Next steps

    At Piwik we care a lot about data ownership. For this reason we need to have an up-to-date copy of all our tickets and comments outside of github.com's servers. Our next step will be to create and release, as open source, a tool that lets anyone create a mirror of their Github issues. See #5299.

    For more information about the Trac -> Github migration, see #5273.

    We look forward to reading your issues on Github!

  • Publish RTMP stream to Red5 Server from iOS camera

    7 September 2015, by Mohammad Asif

    Please look at the following code. I have encoded the CMSampleBufferRef into H.264 (AV_CODEC_ID_H264), but I don't know how to transmit it to the Red5 server (a sketch of one possible transmission step follows the code below).

    Thanks,

    - (void)  captureOutput:(AVCaptureOutput *)captureOutput
     didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
        fromConnection:(AVCaptureConnection *)connection {

    //NSLog(@"This is working ....");

     // [connection setVideoOrientation: [self deviceOrientation] ];

    if( !CMSampleBufferDataIsReady(sampleBuffer) )
    {
       NSLog( @"sample buffer is not ready. Skipping sample" );
       return;
    } else {

       if (captureOutput == videoOutput) {


           CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
           CVPixelBufferLockBaseAddress(pixelBuffer, 0);

           // access the data
           float width = CVPixelBufferGetWidth(pixelBuffer);
           float height = CVPixelBufferGetHeight(pixelBuffer);

           //float bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
           unsigned char *rawPixelBase = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);


           // Convert the raw pixel base to h.264 format

           if (codec == nil) {

               codec = 0;
               context = 0;
               frame = 0;

               fmt = avformat_alloc_context();
               //avformat_write_header(fmt, NULL);

               codec = avcodec_find_encoder(AV_CODEC_ID_H264);

               if (codec == 0) {
                   NSLog(@"Codec not found!!");
                   return;
               }

               context = avcodec_alloc_context3(codec);

               if (!context) {
                   NSLog(@"Context no bueno.");
                   return;
               }

               // Bit rate
               context->bit_rate = 400000; // HARD CODE
               context->bit_rate_tolerance = 10;
               // Resolution

               // Frames Per Second
               context->time_base = (AVRational) {1,25};
               context->gop_size = 1;
               //context->max_b_frames = 1;
               context->width = width;
               context->height = height;
               context->pix_fmt = PIX_FMT_YUV420P;

               // Open the codec
               if (avcodec_open2(context, codec, 0) < 0) {
                   NSLog(@"Unable to open codec");
                   return;
               }

               // Create the frame
               frame = av_frame_alloc();
               if (!frame) {
                   NSLog(@"Unable to alloc frame");
                   return;
               }
           }

           context->width = width;
           context->height = height;

           frame->format = context->pix_fmt;
           frame->width = context->width;
           frame->height = context->height;

           //int nbytes = avpicture_get_size(context->pix_fmt, context->width, context->height);
           //uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);
    //            AVFrame *pFrameDecoded = avcodec_alloc_frame();
    //            int num_bytes2 = avpicture_get_size(context->pix_fmt, frame->width, frame->height);
    //            uint8_t* frame2_buffer2 = (uint8_t *)av_malloc(num_bytes2 * sizeof(uint8_t));
    //            avpicture_fill((AVPicture*)pFrameDecoded, frame2_buffer2, PIX_FMT_YUVJ422P, 320, 240);

           frame->pts = (1.0 / 30) * 60 * count;
           avpicture_fill((AVPicture *) frame, rawPixelBase, context->pix_fmt, frame->width, frame->height);

           int got_output = 0;
           av_init_packet(&packet);
           //avcodec_encode_video2(context, &packet, frame, &got_output);

           do {
               avcodec_encode_video2(context, &packet, frame, &got_output);
               //avcodec_decode_video2(context, &packet, NULL, &got_output);
               // ... handle the encoded packet

               if (isFirstPacket) {
                   [rtmp sendCreateStreamPacket];
                   isFirstPacket = false;
                   //av_dump_format(fmt, 0, [kRtmpEP UTF8String], 1);
                   avformat_alloc_output_context2(&ofmt_ctx, NULL, "flv", [kRtmpEP UTF8String]); //RTMP
               }

               packet.stream_index = ofmt_ctx->nb_streams;
               av_interleaved_write_frame(ofmt_ctx, &packet);
               count ++;

               //[rtmp write:[NSData dataWithBytes:packet.data length:packet.size]];


           } while(got_output);

           // Unlock the pixel data
           CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

           //[rtmp write:[NSData dataWithBytes:packet.data length:packet.size]];

       } else {

       }
       }
    }
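
    A minimal sketch of the missing transmission step (an assumption, not code from the thread): libavformat's "flv" muxer can push the already-encoded H.264 packets to an RTMP endpoint such as Red5. The sketch assumes FFmpeg was built with RTMP support, the encoder context was opened with AV_CODEC_FLAG_GLOBAL_HEADER (so SPS/PPS end up in extradata), and an FFmpeg recent enough to provide avcodec_parameters_from_context(). The URL and the helper names rtmp_open/rtmp_send/rtmp_close are illustrative.

    /*
     * Sketch: mux already-encoded H.264 packets into FLV over RTMP.
     * Red5 speaks RTMP and expects FLV-packaged data, so the "flv" muxer
     * plus an rtmp:// URL (e.g. "rtmp://host/live/stream") is sufficient.
     */
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    AVFormatContext *ofmt_ctx = NULL;
    AVStream *out_strm = NULL;

    /* Call once, after the H.264 encoder context (enc_ctx) has been opened. */
    int rtmp_open(AVCodecContext *enc_ctx, const char *url)
    {
        int ret;

        avformat_network_init();

        /* FLV container over the RTMP protocol. */
        ret = avformat_alloc_output_context2(&ofmt_ctx, NULL, "flv", url);
        if (ret < 0 || !ofmt_ctx)
            return ret;

        out_strm = avformat_new_stream(ofmt_ctx, NULL);
        if (!out_strm)
            return AVERROR(ENOMEM);

        /* Copy codec id, dimensions and extradata (SPS/PPS) into the stream. */
        ret = avcodec_parameters_from_context(out_strm->codecpar, enc_ctx);
        if (ret < 0)
            return ret;
        out_strm->time_base = enc_ctx->time_base;

        /* Open the network connection, then write the FLV header. */
        ret = avio_open(&ofmt_ctx->pb, url, AVIO_FLAG_WRITE);
        if (ret < 0)
            return ret;
        return avformat_write_header(ofmt_ctx, NULL);
    }

    /* Call for every packet the encoder produces. */
    int rtmp_send(AVCodecContext *enc_ctx, AVPacket *pkt)
    {
        /* Timestamps must be in the stream's time base, not the encoder's. */
        av_packet_rescale_ts(pkt, enc_ctx->time_base, out_strm->time_base);
        /* Use the stream's index; nb_streams is one past the last valid index. */
        pkt->stream_index = out_strm->index;
        return av_interleaved_write_frame(ofmt_ctx, pkt);
    }

    /* Call once when capture stops. */
    void rtmp_close(void)
    {
        if (!ofmt_ctx)
            return;
        av_write_trailer(ofmt_ctx);
        avio_closep(&ofmt_ctx->pb);
        avformat_free_context(ofmt_ctx);
        ofmt_ctx = NULL;
    }

    In the capture callback above, rtmp_open() would take the place of the avformat_alloc_output_context2() call made on the first packet, rtmp_send() would replace the direct av_interleaved_write_frame() in the do/while loop, and rtmp_close() would be called when the session ends.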
  • lavf: Add WebM DASH Manifest Muxer

    14 July 2014, by Vignesh Venkatasubramanian
    lavf: Add WebM DASH Manifest Muxer
    

    This patch adds the ability to generate WebM DASH manifest XML using
    ffmpeg. A sample command line would be as follows:

    ffmpeg \
    -f webm_dash_manifest -i video1.webm \
    -f webm_dash_manifest -i video2.webm \
    -f webm_dash_manifest -i audio1.webm \
    -f webm_dash_manifest -i audio2.webm \
    -map 0 -map 1 -map 2 -map 3 \
    -c copy \
    -f webm_dash_manifest \
    -adaptation_sets "id=0,streams=0,1 id=1,streams=2,3" \
    manifest.xml

    It works by exporting the necessary fields as metadata tags in matroskadec
    and using those values to write the appropriate XML fields as per the WebM
    DASH Specification [1]. Some ideas are adopted from the webm-tools project
    [2].

    [1]
    https://sites.google.com/a/webmproject.org/wiki/adaptive-streaming/webm-dash-specification
    [2]
    https://chromium.googlesource.com/webm/webm-tools/+/master/webm_dash_manifest/

    Signed-off-by: Vignesh Venkatasubramanian <vigneshv@google.com>
    Signed-off-by: Michael Niedermayer <michaelni@gmx.at>

    • [DH] Changelog
    • [DH] RELEASE_NOTES
    • [DH] doc/muxers.texi
    • [DH] libavformat/Makefile
    • [DH] libavformat/allformats.c
    • [DH] libavformat/version.h
    • [DH] libavformat/webmdashenc.c