
Media (3)


Other articles (86)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To get a working installation, all software dependencies must be installed manually on the server.
    If you want to use this archive for an installation in farm mode, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Customising by adding your logo, banner or background image

    5 September 2013, by

    Some themes take into account three customisation elements: adding a logo; adding a banner; adding a background image.

On other sites (12498)

  • How to extract important information of an AVPacket from an encoded video

    13 June 2017, by Sanduni Wickramasinghe

    I wrote code to read a video in the encoded domain and was able to retrieve information such as the size and duration of the frames. The AVPacket structure contains a data member; I can read it, but since it is a byte array I cannot use it in a readable format. I want to use this data for comparison with another video file. Please help.

    void CFfmpegmethods::VideoRead(){
        av_register_all();        // not needed on FFmpeg 4.0+, harmless on older builds
        avformat_network_init();

        ofstream outdata;
        const char *url = "H:\\Sanduni_projects\\Sample_video.mp4";
        AVDictionary *options = NULL;
        AVFormatContext *s = avformat_alloc_context();

        // av_packet_alloc() returns a properly initialised packet
        // (preferred over "new AVPacket()")
        AVPacket *pkt = av_packet_alloc();

        // Demuxer options have to be set before the input is opened. These two are
        // only honoured by raw demuxers (e.g. rawvideo); the mp4 demuxer would leave
        // them unconsumed, so they are kept here only as a reference.
        //av_dict_set(&options, "video_size", "640x480", 0);
        //av_dict_set(&options, "pixel_format", "rgb24", 0);

        // open the input stream and read the header (a context must be opened only once)
        if (avformat_open_input(&s, url, NULL, &options) < 0){
            abort();
        }

        // any entry still left in the dictionary was not recognised by the demuxer
        AVDictionaryEntry *e;
        if ((e = av_dict_get(options, "", NULL, AV_DICT_IGNORE_SUFFIX))) {
            fprintf(stderr, "Option %s not recognized by the demuxer.\n", e->key);
            abort();
        }
        av_dict_free(&options);

        //avformat_find_stream_info(s, NULL); // fills in missing stream information

        int i = 1;
        int64_t duration = 0;
        int size = 0;
        uint8_t *data = NULL;          // raw (compressed) packet payload

        int64_t total_size = 0;
        int64_t total_duration = 0;

        // write packet information to a log file
        outdata.open("H:\\Sanduni_projects\\log.txt");

        if (!outdata){
            cerr << "Error: file could not be opened" << endl;
            exit(1);
        }

        // av_read_frame() returns the next packet of the stream on each call
        while (av_read_frame(s, pkt) >= 0){
            duration = pkt->duration;
            size = pkt->size;

            total_size += size;
            total_duration += duration;

            cout << "frame:" << i << " " << size << " " << duration << endl;

            data = pkt->data;
            outdata << "Frame: " << i << " ";
            // note: this streams the compressed payload as if it were a C string,
            // so the log ends up with unreadable binary data
            outdata << data << endl;

            // per-byte processing of pkt->data would go here

            i++;
            av_packet_unref(pkt);      // release the packet's payload every iteration
        }

        av_packet_free(&pkt);

        cout << "total size: " << total_size << endl;
        cout << "total duration: " << total_duration << endl;

        outdata.close();

        // close the input after reading
        avformat_close_input(&s);
    }
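
    Since pkt->data only points at the compressed payload, streaming it with << writes unreadable binary into the log. A minimal sketch of one way to make it readable and comparable (the helper name WritePacketHex is made up here, not part of the question): write each byte as hexadecimal and call the helper inside the read loop in place of outdata << data << endl;.

    #include <iomanip>   // std::hex, std::setw, std::setfill

    // Hypothetical helper: dump one packet's payload as hex so the log stays
    // human-readable and two videos can be compared packet by packet.
    static void WritePacketHex(std::ofstream &out, const AVPacket *pkt, int frame_no){
        out << "Frame: " << frame_no << " size: " << pkt->size << " data: ";
        for (int k = 0; k < pkt->size; k++){
            out << std::hex << std::setw(2) << std::setfill('0')
                << static_cast<int>(pkt->data[k]);
        }
        out << std::dec << std::endl;   // restore decimal output for later writes
    }

    If full hex dumps are too large, a per-packet checksum written the same way would serve the comparison purpose just as well.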

  • Loop through images, detect if contains color, put in subfolder

    16 April 2022, by Samo

    I have two kinds of images in my folder: some are all black, the others are black with yellow (#f8fa27). I am trying to put all the images containing yellow into a subfolder, but I don't know how to do this.

    


    I would like to implement this with ImageMagick or FFmpeg. If possible I would rather avoid a shell script and run the loop via CMD. If you happen to know of another option that also works, that's no problem either.

    


    I've read about https://imagemagick.org/script/fx.php but I don't know how to apply it with my limited skills (a possible fx approach is noted after the script below).

    


    Edit: for now I managed to fix it with Python (rough code, but it works):

    


import cv2
import os
import glob

# loop over every png in the images folder
for filename in glob.glob("*.png"):
    # go back to the folder holding all the images
    # (a previous iteration may have changed into the subfolder)
    os.chdir('C:/Users/.../.../images')
    # load the image as grayscale
    image = cv2.imread(filename, cv2.IMREAD_GRAYSCALE)
    # countNonZero == 0 means every pixel is black
    if cv2.countNonZero(image) == 0:
        print("Black image, skipped")
    else:
        # the only non-black pixels here are the yellow ones (#f8fa27)
        print("Colored image")
        # reload in colour (cv2.COLOR_BGR2RGB is a cvtColor code, not an imread flag)
        image = cv2.imread(filename, cv2.IMREAD_COLOR)
        # subfolder that collects the coloured images
        os.chdir(os.getcwd() + "/yellow")
        # save the image into the subfolder
        cv2.imwrite(filename, image)
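
    For the ImageMagick fx route mentioned above, one untested possibility is the %[fx:mean] format escape: identify -format "%[fx:mean]" image.png prints 0 for a completely black image and a value greater than 0 as soon as any yellow pixel is present, so a CMD for loop could move files based on that output. The Python script above already does the job, so this is only noted as an alternative.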


    


    Thank you :) !

    


  • moviepy black border around png when compositing into an MP4

    27 August 2022, by OneWorld

    Compositing a PNG into an MP4 video creates a black border around its edge.

    


    This is using moviepy 1.0.0

    


    The code below reproduces the MP4 with the attached red-text PNG.

    


    (image: the red-text PNG used in the example)

    


import numpy as np
import moviepy.editor as mped

def composite_txtpng_on_colour():
    # plain green background clip, 400x300, 2 seconds
    bg_color = mped.ColorClip(size=[400, 300], color=np.array([0, 255, 0]).astype(np.uint8),
                              duration=2).set_position((0, 0))
    text_png_position = [5, 5]
    # the red-text PNG with a transparent background
    text_png = mped.ImageClip("./txtpng.png", duration=3).set_position(text_png_position)

    canvas_size = bg_color.size
    stacked_clips = mped.CompositeVideoClip([bg_color, text_png], size=canvas_size).set_duration(2)
    stacked_clips.write_videofile('text_with_black_border_video.mp4', fps=24)

composite_txtpng_on_colour()


    


    The result is an MP4 that can be played in VLC. A screenshot of the black edge can be seen below:

    


    (image: screenshot of the black edge around the text)

    


    Any suggestions to remove the black borders would be much appreciated.

    


    Update: it looks like moviepy does a blit instead of alpha compositing.

    


    def blit(im1, im2, pos=None, mask=None, ismask=False):
    """ Blit an image over another.  Blits ``im1`` on ``im2`` as position ``pos=(x,y)``, using the
    ``mask`` if provided. If ``im1`` and ``im2`` are mask pictures
    (2D float arrays) then ``ismask`` must be ``True``.
    """
    if pos is None:
        pos = [0, 0]

    # xp1,yp1,xp2,yp2 = blit area on im2
    # x1,y1,x2,y2 = area of im1 to blit on im2
    xp, yp = pos
    x1 = max(0, -xp)
    y1 = max(0, -yp)
    h1, w1 = im1.shape[:2]
    h2, w2 = im2.shape[:2]
    xp2 = min(w2, xp + w1)
    yp2 = min(h2, yp + h1)
    x2 = min(w1, w2 - xp)
    y2 = min(h1, h2 - yp)
    xp1 = max(0, xp)
    yp1 = max(0, yp)

    if (xp1 >= xp2) or (yp1 >= yp2):
        return im2

    blitted = im1[y1:y2, x1:x2]

    new_im2 = +im2

    if mask is None:
        new_im2[yp1:yp2, xp1:xp2] = blitted
    else:
        mask = mask[y1:y2, x1:x2]
        if len(im1.shape) == 3:
            mask = np.dstack(3 * [mask])
        blit_region = new_im2[yp1:yp2, xp1:xp2]
        new_im2[yp1:yp2, xp1:xp2] = (1.0 * mask * blitted + (1.0 - mask) * blit_region)
    
    return new_im2.astype('uint8') if (not ismask) else new_im2


    


    and so, Rotem is right.

    


    new_im2[yp1:yp2, xp1:xp2] = (1.0 * mask * blitted + (1.0 - mask) * blit_region)


    


    is

    


    (alpha * img_rgb + (1.0 - alpha) * bg)


    


    and this is how moviepy composites. And this is why we see black at the edges.
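
    As a concrete illustration of that formula (assuming, as is common when text is rendered onto a transparent background, that the anti-aliased edge pixels of the PNG store dark RGB values under partial alpha): for an edge pixel with alpha = 0.5 and stored RGB (0, 0, 0) over the green background (0, 255, 0), the blend gives 0.5 * (0, 0, 0) + 0.5 * (0, 255, 0) = (0, 127, 0) after the uint8 cast, a darkened green, which shows up as the dark fringe around the text.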