
Other articles (25)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as OGV and WebM (played back via HTML5) and MP4 (played back via Flash).
    Audio files are encoded as Ogg (played back via HTML5) and MP3 (played back via Flash).
    Where possible, text documents are analyzed to extract the data needed for search-engine indexing, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
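    The conversion step described above can be approximated with ffmpeg directly; a minimal sketch, one invocation per target format (the helper name and codec choices are illustrative assumptions, not MediaSPIP's actual settings):

    ```python
    import subprocess

    def conversion_cmds(src):
        """Hypothetical helper: one ffmpeg command per web-friendly target format."""
        return [
            ["ffmpeg", "-i", src, "-c:v", "libtheora", "-c:a", "libvorbis", src + ".ogv"],
            ["ffmpeg", "-i", src, "-c:v", "libvpx",    "-c:a", "libvorbis", src + ".webm"],
            ["ffmpeg", "-i", src, "-c:v", "mpeg4",     "-c:a", "libmp3lame", src + ".mp4"],
        ]

    # Each command would then be run in turn, e.g.:
    #     for cmd in conversion_cmds("upload.avi"):
    #         subprocess.run(cmd, check=True)
    ```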

  • Support for all media types

    10 April 2011

    Unlike many programs and other modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (PNG, GIF, JPG, BMP and others); audio (MP3, Ogg, WAV and others); video (AVI, MP4, OGV, MPG, MOV, WMV and others); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheet, presentation), web (HTML, CSS), LaTeX, Google Earth) (...)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

        Distribution   Version name           Version number
        Debian         Squeeze                6.x.x
        Debian         Wheezy                 7.x.x
        Debian         Jessie                 8.x.x
        Ubuntu         The Precise Pangolin   12.04 LTS
        Ubuntu         The Trusty Tahr        14.04

    If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above, or send the necessary fixes to add (...)

Sur d’autres sites (6475)

  • Video from images in Python

    26 February 2018, by R. Patterson

    I can draw a series of images using plt.draw() and plt.pause() so that it produces something similar to an animation in the Python window. I have modified each of the images with various labels, drawings, etc.

    import numpy as np
    from PIL import Image
    import matplotlib.pyplot as plt
    import math

    def display(Intensity):
       l = plt.Line2D(Intensity[0],Intensity[1],color='yellow') #draw ROI/IAL
       ax = plt.gca()
       ax.add_line(l)
       plt.axis('off')
       plt.pause(0.05)
       plt.draw()
       plt.clf()

    #rotate region of interest
    def rotate(origin,Intensity,increment):
       ox, oy = origin #coordinates of centre or rotation
       x_points=[]
       y_points=[]
       angle=math.radians(increment)#change in angle between each image
       for i in range(0,len(Intensity[0])):
           px, py = Intensity[0][i], Intensity[1][i]
           qx = ox+math.cos(angle)*(px-ox)-math.sin(angle)*(py-oy)
           x_points.append(qx)
           qy = oy+math.sin(angle)*(px-ox)+math.cos(angle)*(py-oy)
           y_points.append(qy)
       rotatecoordinates=[]
       rotatecoordinates.append(x_points)
       rotatecoordinates.append(y_points)
       return rotatecoordinates

    def animation(images, Intensity):
       inc=0
       for value in images:
           item = np.array(value) #convert PIL image to an array for imshow
           rotated=rotate([128,128],Intensity,inc)
           plt.imshow(item, interpolation='nearest')
           display(rotated)
           inc+=1

    Image_list=[]
    for i in range(0,50):
       array=np.linspace(0,1,256*256)
       mat=np.reshape(array,(256,256))
       img=Image.fromarray(np.uint8(mat*255),'L') #create images
       Image_list.append(img)

    myROI=([100,150,150,100,100],[100,100,150,150,100]) #region of interest on image
    animation(Image_list,myROI)

    I would like to produce a video file from the images produced. I can’t use the modules imageio, imagemagick, opencv/cv2, etc. I think ffmpeg would work; I have the following code.

    import os

    def save():
       #frames must already exist on disk, named img0.png, img1.png, ...
       os.system("ffmpeg -r 1 -i img%01d.png -vcodec mpeg4 -y movie.mp4")

    I don’t understand how to use it in relation to the code I already have. It doesn’t take any arguments, so how do I relate it to the images I have? I know how to use ImageJ/Fiji to produce videos from images, but I would like to do this in Python; ImageJ also runs out of memory (I have a lot of images, over 2000). Any help would be appreciated, thank you.
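    One way to tie save() to the drawing loop (a sketch, with assumed filenames and frame rate, not the only approach): write each frame to disk as it is drawn, then run ffmpeg once over the numbered files. Building the command as an argument list for subprocess avoids shell-quoting issues:

    ```python
    import subprocess

    def build_ffmpeg_cmd(pattern="img%04d.png", fps=10, out="movie.mp4"):
        """Return the ffmpeg invocation as an argument list for subprocess."""
        return ["ffmpeg", "-y",           # overwrite the output without prompting
                "-framerate", str(fps),   # input frame rate
                "-i", pattern,            # numbered frames: img0000.png, img0001.png, ...
                "-vcodec", "mpeg4",
                out]

    # In display(), just before plt.clf(), write the current figure to disk
    # (frame_number would be the loop counter, e.g. inc from animation()):
    #     plt.savefig("img%04d.png" % frame_number)
    # After all frames have been written:
    #     subprocess.run(build_ffmpeg_cmd(), check=True)
    ```

    Saving each frame as it is drawn also sidesteps holding 2000+ images in memory at once.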

  • Curator of the Samples Archive

    13 May 2011, by Multimedia Mike — General

    Remember how I mirrored the world-famous MPlayerHQ samples archive a few months ago? Due to a series of events, the original archive is no longer online. However, the people who control the mplayerhq.hu domain and I figured out how to make samples.mplayerhq.hu point to samples.multimedia.cx.

    That means... I’m the current owner and curator of our central multimedia samples repository. Such power! This should probably be the fulfillment of a decade-long dream for me, having managed swaths of the archive, most notably the game formats section.

    How This Came To Be

    If you pay any attention to the open source multimedia scene, you might have noticed that there has been a smidge of turmoil. Heated words were exchanged, authority was questioned, some people probably said some things they didn’t mean, and the upshot is that, where once there was one project (FFmpeg), there are now two (the second being Libav). And to everyone who has wanted me to mention it on my blog— there, I finally broke my silence and formally acknowledged the schism.

    For my part, I was just determined to ensure that the samples archive remained online, preferably at the original samples.mplayerhq.hu address. There are 10 years’ worth of web links out there pointing into the original repository.

    Better Solution

    I concede that it’s not entirely optimal to host the repository here at multimedia.cx. While I can offer a crazy amount of monthly bandwidth, I can’t offer rsync (invaluable for keeping mirrors in sync), nor can the server provide anonymous FTP or allow me to offer accounts to other admins who can manage the repository.

    The samples archive is also mirrored at samples.libav.org/samples. I understand that service is provided by VideoLAN. Right now, both repositories are known to be static. I’m open to brainstorms about how to improve the situation.

  • Using libavformat to mux H.264 frames into RTP

    18 septembre 2021, par DanielB6

    I have an encoder that produces a series of H.264 I-frames and P-frames. I'm trying to use libavformat to mux and transmit these frames over RTP, but I'm stuck.

    My program sends RTP data, but the RTP timestamp increments by 1 for each successive frame instead of by 90000/fps. It also doesn't look like it's doing proper H.264 NAL framing, since I can't decode the stream as H.264 in Wireshark.

    I suspect that I'm not setting up the codec information properly, but it appears in many places in the output format context, so it's unclear what exactly needs to be set up. The examples all seem to copy codec context info from encoders, which isn't my use case.

    This is what I'm trying:

    #include <stdio.h>
    #include <libavformat/avformat.h>

    AVFormatContext *context;
    AVStream *stream;

    int main() {
        avformat_network_init();   /* required before using network protocols like RTP */

        context = avformat_alloc_context();

        if (!context) {
            printf("avformat_alloc_context failed\n");
            return -1;
        }

        AVOutputFormat *format = av_guess_format("rtp", NULL, NULL);

        if (!format) {
            printf("av_guess_format failed\n");
            return -1;
        }

        context->oformat = format;

        snprintf(context->filename, sizeof(context->filename), "rtp://%s:%d", "192.168.2.16", 10000);

        if (avio_open(&(context->pb), context->filename, AVIO_FLAG_WRITE) < 0) {
            printf("avio_open failed\n");
            return -1;
        }

        stream = avformat_new_stream(context, NULL);

        if (!stream) {
            printf("avformat_new_stream failed\n");
            return -1;
        }

        stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
        stream->codecpar->codec_id = AV_CODEC_ID_H264;
        stream->codecpar->width = 1920;
        stream->codecpar->height = 1080;

        avformat_write_header(context, NULL);

        ...
        write packets
        ...
    }


    Example write packet:

    int write_packet(uint8_t *data, int size) {
        AVPacket p;
        av_init_packet(&p);
        p.data = data;
        p.size = size;
        p.stream_index = stream->index;

        return av_interleaved_write_frame(context, &p);
    }


    I've even gone so far as to build in libx264, find the encoder, and copy the codec context info from there into the stream codecpar, with the same result. My goal is to build without libx264 and any other libraries that aren't required, but it isn't clear whether libx264 is needed for defaults such as the time base.

    How can the libavformat RTP muxer be initialized to properly send H.264 frames over RTCP+RTP?
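    For reference, RTP uses a 90 kHz clock for video, so at a constant frame rate each frame's RTP timestamp should advance by 90000/fps; that stride can be computed independently of libavformat. A minimal sketch (the function name is illustrative, and whether setting the stream time base and per-packet pts accordingly fixes the muxer's output is an assumption to verify):

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* RTP's 90 kHz video clock means each successive frame's timestamp
     * should advance by clock_rate / fps. If pkt.pts is never set, a
     * timestamp sequence of 0, 1, 2, ... could result, which would match
     * the observed per-frame increment of 1. */
    static int64_t rtp_timestamp_stride(int64_t clock_rate, int64_t fps) {
        return clock_rate / fps;
    }

    int main(void) {
        assert(rtp_timestamp_stride(90000, 30) == 3000);  /* 30 fps */
        assert(rtp_timestamp_stride(90000, 25) == 3600);  /* 25 fps */
        return 0;
    }
    ```

    In the code above, that would mean giving stream->time_base a 90 kHz-compatible value before avformat_write_header() and advancing p.pts by this stride for every packet written. The NAL framing issue is a separate concern (Annex B start codes vs. length-prefixed NAL units) and is worth checking independently.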