Other articles (102)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP described as "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
    Once it has been activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature works straight away. It is therefore not necessary to go through a configuration step for this.

On other sites (10773)

  • matplotlib ArtistAnimation returns a blank video

    28 March 2017, by Mpaull

    I’m trying to produce an animation of a networkx graph changing over time. I’m using the networkx drawing utilities to create matplotlib figures of the graph, and matplotlib’s ArtistAnimation to create an animation from the artists networkx produces. I’ve made a minimal reproduction of what I’m doing here:

    import numpy as np
    import networkx as nx
    import matplotlib.animation as animation
    import matplotlib.pyplot as plt

    # Instantiate the graph model
    G = nx.Graph()
    G.add_edge(1, 2)

    # Keep track of highest node ID
    G.maxNode = 2

    fig = plt.figure()
    nx.draw(G)
    ims = []

    for timeStep in xrange(10):

       G.add_edge(G.maxNode,G.maxNode+1)
       G.maxNode += 1

       pos = nx.drawing.spring_layout(G)
       nodes = nx.drawing.draw_networkx_nodes(G, pos)
       lines = nx.drawing.draw_networkx_edges(G, pos)

       ims.append((nodes,lines,))
       plt.pause(.2)
       plt.cla()

    im_ani = animation.ArtistAnimation(fig, ims, interval=200, repeat_delay=3000, blit=True)
    im_ani.save('im.mp4', metadata={'artist':'Guido'})

    The process works fine while displaying the figures live: it produces exactly the animation I want. It even produces a looping animation in a figure at the end of the script, again what I want, which would suggest that the animation process worked. However, when I open the "im.mp4" file saved to disk, it is a blank white image which runs for the expected period of time, never showing any of the graph images that were shown live.

    I’m using networkx version 1.11 and matplotlib version 2.0. I’m using ffmpeg for the animation, and am running on a Mac with OS X 10.12.3.

    What am I doing incorrectly?

  • avcodec_encode_video returns 0 as output size

    11 June 2014, by vacetahanna

    Hello, what am I doing wrong? When I try to encode a frame, out_size is 0. And if I use avcodec_encode_video2, the return value is 0, which indicates that everything went well, but avpkt.size is 0 afterwards. What am I missing or doing wrong? Thank you so much. Here is my code:

    int EncodeVideoFFMPEG::enc_main( void *istream, void *outstream, int width, int height )
    {
        avcodec_register_all();

        // choose codec
        AVCodec *codec = avcodec_find_encoder(CODEC_ID_H264);

        // set parameters
        AVCodecContext *c = avcodec_alloc_context3(codec);
        c->codec_type = AVMEDIA_TYPE_VIDEO;
        c->bit_rate = 50000;
        c->pix_fmt = PIX_FMT_YUV420P;
        c->width = width;
        c->height = height;
        c->time_base.num = 1;
        c->time_base.den = 25;
        c->gop_size = 20;
        c->max_b_frames = 0;

        // open the codec
        avcodec_open2(c, codec, NULL);

        int got_packet;

        int BYTEPIC = width * height * 3;

        // prepare for changing color space
        struct SwsContext *img_convert_ctx1 =
            sws_getContext(width, height, PIX_FMT_BGR24,
                           width, height, PIX_FMT_YUV420P,
                           SWS_BICUBIC, NULL, NULL, NULL);

        // allocate frames for the color space change
        AVFrame *pictureBGR = alloc_pictureBGR24(width, height);
        AVFrame *picture = alloc_picture420P(width, height);

        // get frame from OGRE and let pictureBGR point to it
        unsigned char *image = new unsigned char[BYTEPIC];
        memcpy(image, istream, BYTEPIC);

        // change from BGR to 420P
        sws_scale(img_convert_ctx1, &image, pictureBGR->linesize, 0, height, picture->data, picture->linesize);

        delete[] image;

        AVPacket avpkt;
        av_new_packet( &avpkt, BYTEPIC );

        // encode with the codec
        int out_size = avcodec_encode_video(c, avpkt.data, avpkt.size, picture);
        //int success = avcodec_encode_video2(c, &avpkt, picture, &got_packet);

        outstream = avpkt.data;

        return out_size;
    }
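
    One common reason for this kind of result (offered here as a likely explanation rather than a confirmed diagnosis) is encoder delay: an H.264 encoder buffers input frames for lookahead, so the first calls can legitimately produce no packet, and the buffered packets only come out after more frames have been submitted or the encoder has been flushed with NULL frames. Below is a minimal sketch of that drain loop; it reuses the opened AVCodecContext c and the frame picture from the code above, together with the avcodec_encode_video2() API already mentioned in the question:

    AVPacket pkt;
    int got_output = 0;

    av_init_packet(&pkt);
    pkt.data = NULL;   // let the encoder allocate the payload
    pkt.size = 0;

    // picture->pts should increase monotonically from frame to frame
    if (avcodec_encode_video2(c, &pkt, picture, &got_output) == 0 && got_output) {
        // consume pkt.data / pkt.size here, then release the packet
        av_free_packet(&pkt);
    }

    // after the last real frame: drain the packets the encoder is still holding
    for (;;) {
        av_init_packet(&pkt);
        pkt.data = NULL;
        pkt.size = 0;
        if (avcodec_encode_video2(c, &pkt, NULL, &got_output) < 0 || !got_output)
            break;
        // consume the delayed packet, then release it
        av_free_packet(&pkt);
    }

    Even with max_b_frames = 0, libx264's rate-control lookahead can hold a few dozen frames back, so a single call that returns no data is not by itself an error.
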
  • How to get raw frame data from AVFrame.data[] and AVFrame.linesize[] without specifying the pixel format?

    25 January 2016, by vivienlwt

    I get the general idea that frame.data[] is interpreted according to the video's pixel format (RGB or YUV). But is there a general way to get all of the pixel data from the frame? I just want to compute a hash of the frame data, without interpreting it to display the image.

    According to AVFrame.h:

    uint8_t* AVFrame::data[AV_NUM_DATA_POINTERS]

    pointer to the picture/channel planes.

    int AVFrame::linesize[AV_NUM_DATA_POINTERS]

    For video, size in bytes of each picture line.

    Does this mean that if I just extract linesize[i] bytes from each data[i], I get the full pixel information about the frame?
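
    Not quite, at least not in general (this note and the sketch below are an illustration, not a statement about the asker's exact use case): linesize[i] usually includes alignment padding added by the decoder, and for planar formats with chroma subsampling plane i can have fewer rows than frame->height, so a byte count based on linesize[i] and the frame height would hash padding bytes or read past a plane. libavutil has helpers that copy exactly the meaningful bytes for any pixel format; a sketch, assuming a decoded video AVFrame whose format, width and height fields are filled in:

    #include <libavutil/frame.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/mem.h>

    /* Copy every plane of the frame into one tightly packed buffer
     * (alignment padding stripped), suitable for hashing.
     * Returns a buffer to release with av_free(), or NULL on error. */
    static uint8_t *frame_to_packed_bytes(const AVFrame *frame, int *out_size)
    {
        enum AVPixelFormat fmt = (enum AVPixelFormat)frame->format;
        int size = av_image_get_buffer_size(fmt, frame->width, frame->height, 1);
        if (size < 0)
            return NULL;

        uint8_t *buf = (uint8_t *)av_malloc(size);
        if (!buf)
            return NULL;

        /* Walks data[]/linesize[] plane by plane for whatever pixel format
         * the frame uses, copying only the meaningful bytes of each line. */
        if (av_image_copy_to_buffer(buf, size,
                                    (const uint8_t * const *)frame->data,
                                    frame->linesize,
                                    fmt, frame->width, frame->height, 1) < 0) {
            av_free(buf);
            return NULL;
        }

        *out_size = size;
        return buf;
    }

    Hashing the returned buffer gives a result that does not depend on the decoder's alignment choices; an alternative is to hash plane by plane using the layout described by av_pix_fmt_desc_get(), but the packed copy keeps the code format-agnostic.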