Advanced search

Media (0)

Keyword: - Tags -/performance

No media matching your criteria are available on this site.

Other articles (40)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP sites while installing their functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which requires no real specific knowledge since SPIP's usual private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

  • Automatic backup of SPIP channels

    1 April 2010, by

    When setting up an open platform, it is important for hosts to have fairly regular backups available in order to guard against any potential problem.
    This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the elements (...)

On other sites (6026)

  • FFMPEG - H.264 encoding BGR image data to YUV420P video file resulting in empty video

    22 September 2022, by Cogentleman

    I'm new to FFMPEG and trying to use it to do some screen capture to a video file, but after a lot of online searching I am stumped as to what I'm doing wrong. Basically, I've already done the work of capturing screen data via DirectX, which stores it in a BGR pixel format, and I'm just trying to put each frame into a video file. There are two functions: setup, which does all the ffmpeg initialization work, and addImage, which is called in the main program loop and puts each buffer of BGR image data into the video file. The technique I'm using is to make two frames, one with the BGR data and one with YUV420P (it doesn't need to be the latter, but after a lot of trial and error it was all I was able to get working with H.264), use sws_scale to copy data between the two, and then send that frame to video.mp4. The file does seem to be written to successfully (its size grows and grows as the program runs), but when I try to view it in VLC I see nothing; indeed, VLC fails to fetch a length for the video, and both the codec and media information panes come up empty. I turned on ffmpeg verbose logging, but all that is spit out is the following:

    Setting default whitelist 'Epu��'
    Timestamps are unset in a packet for stream -1259342440. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
    Encoder did not produce proper pts, making some up.

    From what I am reading, I understand these to be warnings rather than errors that would totally corrupt my video file. I separately went through all the error codes being returned and everything seems nominal to me (zero for success on most calls, and occasionally -11 from avcodec_receive_packet, but the docs indicate that's expected sometimes).
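    For reference, -11 corresponds to AVERROR(EAGAIN) here, which just means the encoder wants more input before it can emit a packet. A minimal sketch of how that return value is usually handled, reusing the names from the code below:

    // Sketch only: AVERROR(EAGAIN) from avcodec_receive_packet() is not a failure.
    int ret = avcodec_receive_packet(context, packet);
    if (ret == AVERROR(EAGAIN)) {
        // The encoder has buffered the input and needs more frames before it
        // can produce a packet; send another frame and call receive again.
    } else if (ret < 0) {
        // A genuine error.
    }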

    Based on my understanding of things as they are, this should be working but isn't, and the logs and error codes give me nothing to go on, so I reckon someone with experience here could save me a ton of time. The code is as follows:

    VideoService.h

    #ifndef VIDEO_SERVICE_H
    #define VIDEO_SERVICE_H

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/imgutils.h>
    #include <libswscale/swscale.h>
    }

    class VideoService {
        public:
            void setup();
            void addImage(unsigned char* data, int lineSize, int width, int height, int align);
        private:
            AVCodecContext* context;
            AVFormatContext* formatContext;
            AVFrame* bgrFrame;
            AVFrame* yuvFrame;
            AVStream* videoStream;
            SwsContext* swsContext;
    };

    #endif

    VideoService.cpp

    #include "VideoService.h"&#xA;#include &#xA;&#xA;void FfmpegLogCallback(void *ptr, int level, const char *fmt, va_list vargs)&#xA;{&#xA;    FILE* f = fopen("ffmpeg.txt", "a");&#xA;    fprintf(f, fmt, vargs);&#xA;    fclose(f);&#xA;}&#xA;&#xA;void VideoService::setup() {&#xA;    int result = 0;&#xA;    av_log_set_level(AV_LOG_VERBOSE);&#xA;    av_log_set_callback(FfmpegLogCallback);&#xA;    bgrFrame = av_frame_alloc();&#xA;    bgrFrame->width = 1920;&#xA;    bgrFrame->height = 1080;&#xA;    bgrFrame->format = AV_PIX_FMT_BGRA;&#xA;    bgrFrame->time_base.num = 1;&#xA;    bgrFrame->time_base.den = 60;&#xA;    result = av_frame_get_buffer(bgrFrame, 1);&#xA;    yuvFrame = av_frame_alloc();&#xA;    yuvFrame->width = 1920;&#xA;    yuvFrame->height = 1080;&#xA;    yuvFrame->format = AV_PIX_FMT_YUV420P;&#xA;    yuvFrame->time_base.num = 1;&#xA;    yuvFrame->time_base.den = 60;&#xA;    result = av_frame_get_buffer(yuvFrame, 1);&#xA;    const AVOutputFormat* outputFormat = av_guess_format("mp4", "video.mp4", "video/mp4");&#xA;    result = avformat_alloc_output_context2(&#xA;        &amp;formatContext,&#xA;        outputFormat,&#xA;        "mp4",&#xA;        "video.mp4"&#xA;    );&#xA;    formatContext->oformat = outputFormat;&#xA;    const AVCodec* codec = avcodec_find_encoder(AVCodecID::AV_CODEC_ID_H264);&#xA;    result = avio_open2(&amp;formatContext->pb, "video.mp4", AVIO_FLAG_WRITE, NULL, NULL);&#xA;    videoStream = avformat_new_stream(formatContext, codec);&#xA;    AVCodecParameters* codecParameters = videoStream->codecpar;&#xA;    codecParameters->codec_type = AVMediaType::AVMEDIA_TYPE_VIDEO;&#xA;    codecParameters->codec_id = AVCodecID::AV_CODEC_ID_HEVC;&#xA;    codecParameters->width = 1920;&#xA;    codecParameters->height = 1080;&#xA;    codecParameters->format = AVPixelFormat::AV_PIX_FMT_YUV420P;&#xA;    videoStream->time_base.num = 1;&#xA;    videoStream->time_base.den = 60;&#xA;    result = avformat_write_header(formatContext, NULL);&#xA;    &#xA;    codec = avcodec_find_encoder(videoStream->codecpar->codec_id);&#xA;    context = avcodec_alloc_context3(codec);&#xA;    context->time_base.num = 1;&#xA;    context->time_base.den = 60;&#xA;    avcodec_parameters_to_context(context, videoStream->codecpar);&#xA;    result = avcodec_open2(context, codec, nullptr);&#xA;    swsContext = sws_getContext(1920, 1080, AV_PIX_FMT_BGRA, 1920, 1080, AV_PIX_FMT_YUV420P, 0, 0, 0, 0);&#xA;}&#xA;&#xA;void VideoService::addImage(unsigned char* data, int lineSize, int width, int height, int align) {&#xA;    int result = 0;&#xA;    result = av_image_fill_arrays(bgrFrame->data, bgrFrame->linesize, data, AV_PIX_FMT_BGRA, 1920, 1080, 1);&#xA;    sws_scale(swsContext, bgrFrame->data, bgrFrame->linesize, 0, 1080, &amp;yuvFrame->data[0], yuvFrame->linesize); &#xA;    result = avcodec_send_frame(context, yuvFrame);&#xA;    AVPacket *packet = av_packet_alloc();&#xA;    result = avcodec_receive_packet(context, packet);&#xA;    if (result != 0) {&#xA;        return;&#xA;    }&#xA;    result = av_interleaved_write_frame(formatContext, packet);&#xA;}&#xA;

    My environment is Windows 10, I'm building with clang++ 12.0.1, and using the FFMPEG 5.1 libs.
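    For reference, the "Timestamps are unset" and "Encoder did not produce proper pts" warnings point at frames being sent to the encoder without timestamps. A minimal sketch of the missing bookkeeping, reusing the member names from the code above (frameIndex is an assumed extra counter, not part of the original class):

    // Sketch only: assign each frame a pts in the encoder time base (1/60)
    // and rescale emitted packets into the stream time base before writing.
    yuvFrame->pts = frameIndex++;                 // frameIndex: assumed int64_t member
    result = avcodec_send_frame(context, yuvFrame);
    AVPacket* packet = av_packet_alloc();
    while (avcodec_receive_packet(context, packet) == 0) {
        av_packet_rescale_ts(packet, context->time_base, videoStream->time_base);
        packet->stream_index = videoStream->index;
        av_interleaved_write_frame(formatContext, packet);
        av_packet_unref(packet);
    }
    av_packet_free(&packet);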

  • Handling high volume traffic and traffic peaks with Matomo just got easier

    16 April 2018, by Matomo Core Team

    When you use the self-hosted version of Matomo on-premise instead of the Matomo cloud-hosted solution, you may experience traffic peaks on your Matomo server when the traffic volume on your websites increases. For example, every day at a certain time you might receive two or three times the amount of traffic that usually visits your website. This can have many negative impacts, including:

    • Slow loading times for your JavaScript tracker (piwik.js), which in turn may slow down your website and give your users a poor experience. You may also see fewer page views in Matomo, because by the time the tracker has loaded, the user has already moved on to another page.
    • Some tracking requests might simply be ignored because at some point your server can no longer handle them, resulting in many untracked visits and page views.
    • You may need additional servers just to handle traffic peaks, which means increased server costs, maintenance work, and maintenance costs.

    The solution

    Handling traffic peaks has been possible with Matomo for years using the Queued Tracking plugin. When this feature is enabled, tracking requests are put into a queue instead of being processed immediately; a separate job then takes the requests out of the queue and processes them. This brings various benefits.

    Faster tracking

    It improves tracking speed on your server by a factor of 5 to 15. For example, instead of a tracking request taking 50ms, it takes only 5ms. This means your server can handle far more concurrent requests than with traditional tracking and is much more likely to survive traffic peaks without any trouble.

    Faster processing

    When a request is queued, it still needs to be processed eventually. Because Queued Tracking can take multiple tracking requests out of the queue at once and process them in one go, processing speed increases massively as well. By default, each tracking request has to bootstrap Matomo and redo a lot of work, which takes quite a bit of time (you’d be surprised). With batching, many things can be cached and don’t have to be done over and over. As a result, your server can process tracking requests much faster and needs fewer resources overall, which in turn reduces cost and trouble.

    Queued Tracking is now easier to set up

    In the background, Queued Tracking has been using Redis, an in-memory database. While Redis is very fast, it’s not simple to set up and maintain, especially when it comes to making Redis “highly available” and scaling it. Your servers also need a lot more memory for Redis, since all queued tracking requests are stored in memory.

    One click setup

    We have now added support for a MySQL backend, so you can activate Queued Tracking with a single click. What used to take hours or even weeks to set up, plus a lot of maintenance, can now be done in seconds. Queued Tracking simply reuses the database you have been using all along to store your visits. A side benefit is that your server won’t need more memory, and all queued tracking requests even survive a server reboot.

    Both Redis and MySQL are now supported by Queued Tracking. If you have experience managing Redis, we still recommend that solution, as it’s likely a bit faster. In most cases, however, the MySQL backend should work just as well.

    Further improvements

    We have made various other improvements to Queued Tracking that increase performance, and you can now be notified when the number of queued tracking requests reaches a certain threshold. View the changelog for a list of all changes.

    Learn more

    We have set up Queued Tracking many times for high-volume traffic and traffic peaks and have been amazed by the results. Often we can even reduce the overall number of servers needed.

    If this sounds like something that could benefit you, we recommend you have a look at the Queued Tracking page and also check out the FAQ. You might also be interested in learning how to configure Matomo for speed.

    Need help setting up, maintaining, or scaling Matomo? Get in touch now.


  • Subtitling Sierra VMD Files

    1 June 2016, by Multimedia Mike — Game Hacking

    I was contacted by a game translation hobbyist from Spain (henceforth known as The Translator). He had set his sights on Sierra’s 7-CD Phantasmagoria. This mammoth game was driven by a lot of FMV files and animations that have speech. These require language translation in the form of video subtitling. He’s lucky that he found possibly the one person on the whole internet who has just the right combination of skill, time, and interest to pull this off. And why would I care about helping? I guess I share a certain camaraderie with game hackers. Don’t act so surprised. You know what kind of stuff I like to work on.

    The FMV format used in this game is VMD, which makes an appearance in numerous Sierra titles. FFmpeg already supports decoding this format. FFmpeg also supports subtitling video. So, ideally, all that’s necessary to support this goal is to add a muxer for the VMD format that can encode raw video and audio, both of which the format supports. Implement video compression as extra credit.

    The pipeline that I envisioned looks like this:

    [Figure: VMD Subtitling Process]

    “Trivial!” I surmised. I just never learn, do I?

    The Plan
    So here’s my initial pitch, outlining the work I estimated that I would need to do towards the stated goal:

    1. Create a new file muxer that produces a syntactically valid VMD file with bogus video and audio data. Make sure it works with both FFmpeg’s playback system and the proper Phantasmagoria engine.
    2. Create a new video encoder that essentially operates in pass-through mode while correctly building a palette.
    3. Create a new basic encoder for the video frames.

    A big unknown for me was exactly how subtitle handling operates in FFmpeg. Thanks to this project, I now know. I was concerned because I was pretty sure that font rendering entails anti-aliasing which bodes poorly for keeping the palette count under 256 unique colors.

    Computer Science Puzzle
    When pondering how to process the palette, I was excited for the opportunity to exercise actual computer science. FFmpeg converts paletted frames to full RGB frames. Then it needs to convert them back to paletted frames. I had a vague recollection of solving this problem once before when I was experimenting with a new paletted video codec. I seem to recall that I did the palette conversion in a very naive manner: I just used a static 256-element array and processed each RGB pixel of the frame, checking whether the value already occurred in the table (an O(n) lookup) and adding it otherwise.

    There are more efficient algorithms, however, such as hash tables and trees. Somewhere along the line, FFmpeg helpfully acquired a rarely-used tree data structure, which was perfect for this project.
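    As a rough illustration of the difference (a sketch with assumed names, not FFmpeg’s actual tree API), the same accumulation can be keyed on the packed RGB value so each lookup is O(log n) instead of a linear scan:

    #include <cstdint>
    #include <map>

    // Accumulates up to 256 unique colors, mapping packed 0xRRGGBB values
    // to palette slots with O(log n) lookups via a balanced tree (std::map).
    struct PaletteBuilder {
        std::map<uint32_t, uint8_t> index;
        uint32_t palette[256];
        int count = 0;

        // Returns the slot for this color, adding it if there is room;
        // returns -1 once the palette is full so the caller can react.
        int lookupOrAdd(uint32_t rgb) {
            auto it = index.find(rgb);
            if (it != index.end())
                return it->second;
            if (count >= 256)
                return -1;
            palette[count] = rgb;
            index.emplace(rgb, static_cast<uint8_t>(count));
            return count++;
        }
    };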

    So I was pretty pleased with this optimization. Too bad this wouldn’t survive to the end of the effort.

    Another palette-related challenge was the fact that a group of pictures would be accumulating a new palette but that palette needed to be recorded before the group. Thus, the muxer needed to have extra logic to rewind the file when the video encoder transmitted a palette change.

    Video Compression
    VMD has a few methods in its compression toolbox. It can use interframe differencing, it has some RLE, or it can code a frame raw. It can also use a custom LZ-like format on top of these. For early prototypes, I elected to leave each frame coded raw. After the concept was proved, I implemented the frame differencing.


    [Figures: VMD frame #1, VMD frame #2, VMD frame difference. Top frame compared with the middle frame yields the bottom frame: red pixels indicate changes.]

    Encoding only those red dots in between vast runs of unchanged pixels yielded a huge, measurable improvement. The next step was to try wiring up FFmpeg’s existing LZ compression facilities to the encoder. This turned out to be implausible, since VMD’s LZ variant has nothing to do with anything FFmpeg already provides. Fortunately, the LZ piece is not absolutely required, and frame differencing + RLE provides plenty of compression.
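    A sketch of the differencing idea (a hypothetical helper, not the actual encoder code): walk the previous and current rows in lockstep and emit alternating skip counts and changed-pixel runs, so unchanged spans cost only a counter:

    #include <cstdint>
    #include <utility>
    #include <vector>

    // One run: skip 'skip' unchanged pixels, then output 'changed' pixels.
    struct Run {
        uint16_t skip;
        std::vector<uint8_t> changed;
    };

    std::vector<Run> diffRow(const uint8_t* prev, const uint8_t* cur, int width) {
        std::vector<Run> runs;
        int x = 0;
        while (x < width) {
            int start = x;
            while (x < width && cur[x] == prev[x]) ++x;   // count unchanged pixels
            if (x == width) break;                        // row ends on a skip
            Run run;
            run.skip = static_cast<uint16_t>(x - start);
            while (x < width && cur[x] != prev[x])        // collect changed pixels
                run.changed.push_back(cur[x++]);
            runs.push_back(std::move(run));
        }
        return runs;   // an RLE pass could further collapse repeats in 'changed'
    }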

    Subtitling
    I’ve never done anything, multimedia programming-wise, concerning subtitles. I guess all the entertainment I care about has always been in my native tongue. What a good excuse to program outside of my comfort zone!

    First, I needed to know how to access FFmpeg’s subtitling facilities. Fortunately, The Translator did the legwork on this matter so I didn’t have to figure it out.

    However, I intuitively had misgivings about this phase. I had heard that the subtitling process performs anti-aliasing. That means that the image would need to be promoted to a higher colorspace for this phase and that the anti-aliasing process would likely push the color count way past 256. Some quick tests revealed this to be the case, as the running color count would leap by several hundred colors as soon as the palette accounting algorithm encountered a subtitle.

    So I dug into the subtitle subsystem. I discovered that the subtitle library operates by creating a linked list of subtitle bitmaps that the client app must render. The bitmaps consist of 8-bit alpha transparency values that must be composited onto the target frame (i.e., 0 = transparent, 255 = 100% opaque). For example, the letter ‘H’:

                                      (with 00s removed)
    13 F8 41 00 00 00 00 68 E4  |  13 F8 41             68 E4    
    14 FF 44 00 00 00 00 6C EC  |  14 FF 44             6C EC
    14 FF 44 00 00 00 00 6C EC  |  14 FF 44             6C EC
    14 FF 44 00 00 00 00 6C EC  |  14 FF 44             6C EC
    14 FF DC D0 D0 D0 D0 E4 EC  |  14 FF DC D0 D0 D0 D0 E4 EC
    14 FF 7E 50 50 50 50 9A EC  |  14 FF 7E 50 50 50 50 9A EC
    14 FF 44 00 00 00 00 6C EC  |  14 FF 44             6C EC
    14 FF 44 00 00 00 00 6C EC  |  14 FF 44             6C EC
    14 FF 44 00 00 00 00 6C EC  |  14 FF 44             6C EC
    11 E0 3B 00 00 00 00 5E CE  |  11 E0 3B             5E CE
    

    To get around the color explosion problem, I chose a threshold value and quantized values above and below to 255 and 0, respectively. Further, the process chooses an appropriate color from the existing palette rather than introducing any new colors.
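    A sketch of that quantization step (the threshold value here is an assumption; the post does not say which cutoff was actually used):

    #include <cstdint>

    // Composites one subtitle glyph onto a paletted frame. Alpha is snapped
    // to fully opaque or fully transparent so no blended colors appear, and
    // opaque pixels reuse a color already present in the palette.
    void compositeGlyph(uint8_t* frame, int frameStride,
                        const uint8_t* alpha, int w, int h,
                        int x0, int y0,
                        uint8_t textColorIndex /* nearest existing palette slot */) {
        const uint8_t threshold = 128;   // assumed cutoff
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                if (alpha[y * w + x] >= threshold)
                    frame[(y0 + y) * frameStride + (x0 + x)] = textColorIndex;
                // below the threshold, the underlying pixel is left untouched
    }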

    Muxing Matters
    In order to force VMD into a general purpose media framework, a lot of special information needs to be passed around. Like many paletted codecs, the palette needs to be transmitted from the file demuxer to the video decoder via some side channel. For re-encoding, this also implies that the palette needs to make the trip from the video encoder to the file muxer. As if this wasn’t enough, individual VMD frames have even more data that needs to be ferried between the muxer and codec levels, including frame change boundaries. FFmpeg provides methods to do these things, but I could not always rely on the systems to relay the data in all cases. I was probably doing something wrong; I accept that. Instead, I just packed all the information at the front of an encoded frame and split it apart in the muxer.
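    A sketch of that in-band workaround (the layout is illustrative only, not the actual byte format used): the encoder prepends its side information to every packet, and the muxer peels it back off before writing the real frame data:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Side information the encoder must smuggle to the muxer with each frame.
    struct FrameSideInfo {
        uint8_t  paletteChanged;          // nonzero when 'palette' is valid
        uint8_t  palette[256 * 3];        // current palette entries (R, G, B)
        uint16_t changeTop, changeBottom; // frame change boundaries
    };

    // Encoder side: emit [side info][frame payload] as one opaque packet.
    std::size_t packFrame(uint8_t* out, const FrameSideInfo& info,
                          const uint8_t* payload, std::size_t payloadLen) {
        std::memcpy(out, &info, sizeof info);
        std::memcpy(out + sizeof info, payload, payloadLen);
        return sizeof info + payloadLen;
    }

    // Muxer side: split the packet back apart before writing the file.
    const uint8_t* unpackFrame(const uint8_t* packet, FrameSideInfo* info) {
        std::memcpy(info, packet, sizeof *info);
        return packet + sizeof *info;     // points at the real frame payload
    }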

    I could not quite figure out how to get the audio and video muxed correctly. As a result, neither FFmpeg nor the Phantasmagoria engine could replay the files correctly.

    Plan B
    Since I was having so much trouble creating an entirely new VMD file, likely due to numerous unknown bits of the file format, I thought of another angle : re-use the existing VMD file. For this approach, I kept the video encoder and file muxer that I created in the initial phase, but modified the file muxer to emit a special intermediate file. Then, I created a Python tool to repackage the original VMD file using compressed video data in the intermediate file.

    For this phase, I also implemented a command line switch for FFmpeg to disable subtitle blending, to make the feature feel like less of an unofficial hack, as though this nonsense would ever have a chance of being incorporated upstream.

    At this point, I was seeing some success with the complete, albeit roundabout, subtitling process. I constructed a subtitle file using “Spanish I Learned From Mexican Telenovelas” and the frames turned out fairly readable:


    Le puso los cuernos a él

    “she cheated on him”


    es un desgraciado

    “he’s a scumbag” … these random subtitles could fit surprisingly well!


    The few files that I tested appeared to work fine. But then I handed off my work to The Translator and he immediately found a bunch of problems. According to my notes, the problems mostly took the form of flashing, solid-color frames. Further, I found tiny, mostly imperceptible flaws in my RLE compressor, usually only detectable by running strict comparison tools, but I wasn’t satisfied.

    At this point, I think I attempted to just encode the entire palette at the front of each frame, as allowed by the format, but that did not seem to fix any problems. My notes are not completely clear on this matter (likely because I was still trying to figure out the exact problem), but I think it had to do with FFmpeg inserting extra video frames in order to even out gaps in the video framerate.

    Sigh, Plan C
    At this point, I was getting tired of trying to force FFmpeg to do this. So I decided to minimize its involvement using lessons learned up to this point.

    The next pitch:

    1. Create a new C program that can open an existing VMD file and output an identical VMD file. I know this sounds easy, but the specific method of copying entails interpreting individual parts of the file and writing those individual parts to the new file. This is in preparation for…
    2. Import the VMD video decoder functions directly into the program to decode the individual video frames and re-encode them, replacing the video frames as the file is rewritten.
    3. Wire up the subtitle system. During the adventure to disable subtitle blending, I accidentally learned enough about interfacing to the subtitle library to just invoke it directly.
    4. Rewrite the RLE method so that it is 100% correct.

    Off to work I went. The part about lifting the existing VMD decoder functions out of their libavcodec nest turned out not to be that straightforward. As an alternative, I modified the decoder to dump the raw frames to an intermediate file. In doing so, I think I was able to avoid the issue of the duplicated frames that plagued the previous efforts.

    Also, remember how I was really pleased with the palette conversion technique in which I was able to leverage computer science big-O theory? By this stage, I had no reason to convert the paletted video to RGB in the first place; all of the decoding, subtitling and re-encoding operates in the paletted colorspace.

    This approach seemed to work pretty well. The final program is subtitle-vmd.c. The process is still a little weird. The modifications in my own FFmpeg fork are necessary to create an intermediate file that the new C tool can operate with.

    Next Steps
    The Translator has found some assorted bugs and corner cases that still need to be ironed out. Further, for extra credit, I need to find the change windows for each frame to improve compression just a little more. I don’t think I will be trying for LZ compression, though.

    However, almost as soon as I had this whole system working, The Translator informed me that there is another, different movie format in play in the Phantasmagoria engine called ROBOT, with an extension of RBT. Fortunately, enough of the algorithms have been reverse engineered and re-implemented in ScummVM that I was able to sort out enough details for another subtitling project. That will be the subject of a future post.

    See Also: