
Media (2)
-
Valkaama DVD Label
4 October 2011
Updated: February 2013
Language: English
Type: Image
-
Podcasting Legal Guide
16 May 2011
Updated: May 2011
Language: English
Type: Text
Other articles (59)
-
Permissions overridden by plugins
27 April 2010, by MediaSPIP core
autoriser_auteur_modifier(), so that visitors are able to edit their own information on the authors page
-
Submitting improvements and additional plugins
10 April 2011
If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know, and its integration into the official distribution will be considered.
You can use the development mailing list to announce it or to ask for help with writing the plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP's SPIP-zone mailing list to (...)
-
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...)
On other sites (10190)
-
Finding Optimal Code Coverage
7 March 2012, by Multimedia Mike — Programming
A few months ago, I published a procedure for analyzing code coverage of the test suites exercised in FFmpeg and Libav. I used it to add some more tests, and I have it on good authority that it has helped other developers fill in some gaps as well (beginning with students helping out with the projects as part of the Google Code-In program). Now I'm wondering about ways to do better.
Current Process
When adding a test that depends on a sample (like a demuxer or decoder test), it's ideal to add a sample that is A) small, and B) exercises as much of the codebase as possible. When I was studying code coverage statistics for the WC4-Xan video decoder, I noticed that the sample didn't exercise one of the 2 possible frame types. So I scouted samples until I found one that covered both types, trimmed the sample down, and updated the coverage suite.
I started wondering about a method for finding the optimal test sample for a given piece of code, one that exercises every code path in a module. Okay, so that's foolhardy in the vast majority of cases (although I was able to add one test spec that pushed a module's code coverage from 0% all the way to 100% — but the module in question only had 2 exercisable lines). Still, given a large enough corpus of samples, how can I find the smallest set of samples that exercises the complete codebase?
This almost sounds like an NP-complete problem. But why should that stop me from trying to find a solution?
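In fact, it is essentially the minimum-cost set cover problem, which is NP-hard. Writing $U$ for the set of exercisable line numbers, $S_j \subseteq U$ for the lines touched by sample $j$, and $c_j$ for its cost (say, its size in bytes), the task is:

$$\min \sum_j c_j x_j \quad\text{subject to}\quad \sum_{j\,:\,i \in S_j} x_j \ge 1 \;\;\forall i \in U, \qquad x_j \in \{0,1\}.$$

The classic greedy heuristic (repeatedly take the sample covering the most still-uncovered lines per byte) comes within a logarithmic factor of optimal, and the size-sorted pass described later is in the same spirit.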
Science Project
Here's the pitch:
- Instrument FFmpeg with code coverage support
- Download lots of media to exercise a particular module
- Run FFmpeg against each sample and log code coverage statistics
- Distill the resulting data in some meaningful way in order to obtain more optimal code coverage
That second step sounds harsh: downloading lots and lots of media. Fortunately, there is at least one multimedia format in the projects that tends to be extremely small: ANSI. These are files that are designed to display elaborate scrolling graphics using text mode. Further, the FATE sample currently deployed for this test (TRE_IOM5.ANS) only exercises a little less than 50% of the code in libavcodec/ansi.c. I believe this makes the ANSI video decoder a good candidate for this experiment.
Procedure
First, find a site that hosts a lot of ANSI files. Hi, sixteencolors.net. This site has lots (on the order of 4000) of artpacks, which are ZIP archives that contain multiple ANSI files (and sometimes some other files). I scraped a list of all the artpack names. In an effort to be responsible, I randomized the list of artpacks and downloaded periodically and with limited bandwidth ('wget --limit-rate=20k').
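In Python, that responsible-download step might look like the following sketch; the list file name and the archive's URL layout are my guesses, not details from the original procedure:

import random
import subprocess
import time

# Hypothetical input: one artpack ZIP name per line, scraped beforehand.
with open("artpack_list.txt") as f:
    artpacks = [line.strip() for line in f if line.strip()]

random.shuffle(artpacks)  # spread requests across the whole collection
for name in artpacks:
    # Throttle each transfer and pause between downloads to stay polite.
    subprocess.run(["wget", "--limit-rate=20k",
                    "https://sixteencolors.net/packs/" + name],
                   check=False)
    time.sleep(60)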
Run 'gcov' on ansi.c in order to gather the full set of line numbers to be covered.
For each artpack, unpack the contents, run the instrumented FFmpeg on each file inside, run ‘gcov’ on ansi.c, and log statistics including the file’s size, the file’s location (artpack.zip:filename), and a comma-separated list of line numbers touched.
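A minimal sketch of that per-file measurement loop, again in Python; the ffmpeg invocation, the location of the .gcda counter files, and the log format are assumptions based on the description above:

import glob
import os
import re
import subprocess
import zipfile

def lines_touched(gcov_listing="ansi.c.gcov"):
    # In gcov's annotated listing each line reads "count:lineno:source";
    # a numeric count means the line executed, '#####' means it was
    # executable but never reached, and '-' marks non-executable lines.
    touched = set()
    with open(gcov_listing) as f:
        for line in f:
            m = re.match(r"\s*(\d+)\*?:\s*(\d+):", line)
            if m and int(m.group(2)) > 0:
                touched.add(int(m.group(2)))
    return touched

def log_artpack(artpack, workdir="unpacked", log="coverage.log"):
    with zipfile.ZipFile(artpack) as z:
        z.extractall(workdir)
    with open(log, "a") as out:
        for name in sorted(os.listdir(workdir)):
            path = os.path.join(workdir, name)
            # Wipe the accumulated counters so each sample is measured alone.
            for gcda in glob.glob("libavcodec/*.gcda"):
                os.remove(gcda)
            # Force the tty demuxer so every file hits the ANSI code path,
            # and throw the decoded output away.
            subprocess.run(["./ffmpeg", "-f", "tty", "-i", path,
                            "-f", "null", "-"], check=False)
            subprocess.run(["gcov", "-o", "libavcodec", "libavcodec/ansi.c"],
                           check=False)
            # One record per sample: size, artpack.zip:filename, line list.
            lines = ",".join(map(str, sorted(lines_touched()))) or "-"
            out.write("%d %s:%s %s\n" % (os.path.getsize(path),
                                         os.path.basename(artpack), name,
                                         lines))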
Definition of ‘Optimal’
The foregoing procedure worked and yielded useful raw data. Now I have to figure out how to analyze it.
I think it's most desirable to have the smallest files (in terms of bytes) that exercise the most lines of code. To that end, I sorted the results by file size, ascending. A Python script initializes a set of all exercisable line numbers in ansi.c, then iterates through each file's stats line, adding the file to the list of candidate samples if its set of exercised lines can remove any line numbers from the overall set of lines. Ideally, that set of lines should devolve to an empty set.
I think a second possible approach is to find the single sample that exercises the most code and then proceed with the previously described method.
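Sticking with Python, both analyses might look like this condensed sketch; coverage.log is the hypothetical log from the earlier sketch, and all_lines is the full set of exercisable line numbers gathered by the initial gcov run:

def load_records(log="coverage.log"):
    # Each record: (size_in_bytes, "artpack.zip:filename", lines_exercised).
    # Some filenames contain spaces, so peel fields off both ends of the row.
    records = []
    with open(log) as f:
        for row in f:
            head, lines = row.rstrip("\n").rsplit(None, 1)
            size, location = head.split(None, 1)
            records.append((int(size), location,
                            set() if lines == "-"
                            else {int(n) for n in lines.split(",")}))
    return records

def smallest_covering_set(records, all_lines):
    # Method 1: walk the samples smallest-first, keeping any file whose
    # exercised lines remove something from the not-yet-covered set.
    remaining, picked = set(all_lines), []
    for size, location, lines in sorted(records, key=lambda r: r[0]):
        if lines & remaining:
            picked.append((size, location))
            remaining -= lines
    return picked, remaining  # ideally `remaining` dwindles to empty

def best_single_sample(records):
    # Method 2: the single sample that exercises the most lines on its own.
    return max(records, key=lambda r: len(r[2]))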
Initial Results
So far, I have analyzed 13324 samples from 357 different artpacks provided by sixteencolors.net.
Using the first method, I can find a set of samples that covers nearly 80% of ansi.c:
0 bytes: bad-0494.zip:5
1 bytes: grip1293.zip:-ANSI---.---
1 bytes: pur-0794.zip:.
2 bytes: awe9706.zip:-ANSI───.───
61 bytes: echo0197.zip:-(ART)-
62 bytes: hx03.zip:HX005.DAT
76 bytes: imp-0494.zip:IMPVIEW.CFG
82 bytes: ice0010b.zip:_cont'd_.___
101 bytes: bdp-0696.zip:BDP2.WAD
112 bytes: plain12.zip:--------.---
181 bytes: ins1295v.zip:-°VGA°-. н
219 bytes: purg-22.zip:NEM-SHIT.ASC
289 bytes: srg1196.zip:HOWTOREQ.JNK
315 bytes: karma-04.zip:FASHION.COM
318 bytes: buzina9.zip:ox-rmzzy.ans
411 bytes: solo1195.zip:FU-BLAH1.RIP
621 bytes: ciapak14.zip:NA-APOC1.ASC
951 bytes: lght9404.zip:AM-TDHO1.LIT
1214 bytes: atb-1297.zip:TX-ROKL.ASC
2332 bytes: imp-0494.zip:STATUS.ANS
3218 bytes: acepak03.zip:TR-STAT5.ANS
6068 bytes: lgc-0193.zip:LGC-0193.MEM
16778 bytes: purg-20.zip:EZ-HIR~1.JPG
20582 bytes: utd0495.zip:LT-CROW3.ANS
26237 bytes: quad0597.zip:MR-QPWP.GIF
29208 bytes: mx-pack17.zip:mx-mobile-source-logo.jpg
----
109440 bytes total

A few notes about that list: Some of those filenames are composed primarily of control characters. l33t, and all that. The first file is 0 bytes. I wondered if I should discard 0-length files but decided to keep them in, especially if they exercise lines that wouldn't normally be activated. Also, there are a few JPEG and GIF files in the set. I should point out that I forced the tty demuxer using '-f tty', and there isn't much in the way of signatures for this format. So, again, whatever exercises more lines is better.
Using this same corpus, I tried approach 2: which single sample exercises the most lines of the decoder? Answer: blde9502.zip:REQUEST.EXE. Huh. I checked it out, and 'file' IDs it as an MS-DOS executable. So that approach wasn't fruitful, at least not for this corpus, since I'm forcing everything through this narrow code path.
Think About The Future
Where can I take this next? The cloud! I have people inside the search engine industry who have furnished me with extensive lists of specific types of multimedia files from around the internet. I also see that Amazon Web Services Elastic Compute Cloud (AWS EC2) instances don't charge for incoming bandwidth. I think you can see where I'm going with this.
See Also:
-
FFMPEG "Could not allocate memory" Errors through php
10 December 2019, by Applepiee
I've tried to find a solution online for days now but cannot find one.
I just switched servers (Intel Xeon to AMD), and since the switch I have not been able to get ffmpeg conversions working through the PHP script. ffmpeg is the exact same version, and all PHP settings are in place.
All commands that the script executes (copied from the log files) were tried in a shell and ran without problems.
The errors look like this:
[124] => handler_name : Video Media Handler
[125] => Stream #8:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 160 kb/s (default)
[126] => Metadata:
[127] => handler_name : Sound Media Handler
[128] => Stream mapping:
[129] => Stream #0:0 (h264) -> concat:in0:v0
[130] => Stream #1:0 (h264) -> concat:in1:v0
[131] => Stream #2:0 (h264) -> concat:in2:v0
[132] => Stream #3:0 (h264) -> concat:in3:v0
[133] => Stream #4:0 (h264) -> concat:in4:v0
[134] => Stream #5:0 (h264) -> concat:in5:v0
[135] => Stream #6:0 (h264) -> concat:in6:v0
[136] => Stream #7:0 (h264) -> concat:in7:v0
[137] => scale -> Stream #0:0 (libvpx)
[138] => Press [q] to stop, [?] for help
[139] => [h264 @ 0x34f1dc0] get_buffer() failed
[140] => [h264 @ 0x34f1dc0] thread_get_buffer() failed
[141] => [h264 @ 0x34f1dc0] decode_slice_header error
[142] => [h264 @ 0x34f1dc0] no frame!
[143] => [h264 @ 0x350e680] Error splitting the input into NAL units.
[144] => [h264 @ 0x352af40] Cannot allocate memory.
[145] => [h264 @ 0x352af40] Could not allocate memory
[146] => [h264 @ 0x352af40] h264_slice_header_init() failedError while decoding stream #0:0: Cannot allocate memory
[147] => [h264 @ 0x352af40] Cannot allocate memory.
[148] => [h264 @ 0x352af40] Could not allocate memory
[149] => [h264 @ 0x352af40] h264_slice_header_init() failedError while decoding stream #0:0: Cannot allocate memory
[150] => [h264 @ 0x352af40] Cannot allocate memory.
[151] => [h264 @ 0x352af40] Could not allocate memory
[152] => [h264 @ 0x352af40] h264_slice_header_init() failedError while decoding stream #0:0: Cannot allocate memory
[153] => [h264 @ 0x352af40] Cannot allocate memory.
[154] => [h264 @ 0x352af40] Could not allocate memory
[155] => [h264 @ 0x352af40] h264_slice_header_init() failedError while decoding stream #0:0: Cannot allocate memory
[156] => [h264 @ 0x352af40] Cannot allocate memory.
[157] => [h264 @ 0x352af40] Could not allocate memory
...
[211519] => [h264 @ 0x886a3c0] h264_slice_header_init() failedToo many errors when draining, this is a bug. Stop draining and force EOF.
[211520] => Error while decoding stream #7:0: Internal bug, should not have happened
[211521] => Cannot allocate memory.
[211522] => sws: initFilter failed
[211523] => [Parsed_scale_1 @ 0x8e0be40] Failed to configure output pad on Parsed_scale_1
[211524] => Error reinitializing filters!
[211525] => Error while filtering: Operation not permitted
[211526] => Finishing stream 0:0 without any data written to it.
[211527] => [libvpx @ 0x6f4c600] v1.8.1-301-g89375f031
[211528] => Output #0, webm, to '/home/website/public_html/media/videos/tmb/2420/video_copy.webm':
[211529] => Metadata:
[211530] => major_brand : isom
[211531] => minor_version : 512
[211532] => compatible_brands: isomiso2avc1mp41
[211533] => title : Aibeya The Animation
[211534] => encoder : Lavf58.35.100
[211535] => Chapter #0:0: start 0.000000, end 102.102000
[211536] => Metadata:
[211537] => title : Intro
[211538] => Chapter #0:1: start 102.102000, end 110.369000
[211539] => Metadata:
[211540] => title : Title
[211541] => Chapter #0:2: start 110.369000, end 312.179000
[211542] => Metadata:
[211543] => title : Part 1
[211544] => Chapter #0:3: start 312.179000, end 548.415000
[211545] => Metadata:
[211546] => title : Part 2
[211547] => Chapter #0:4: start 548.415000, end 706.831000
[211548] => Metadata:
[211549] => title : Part 3
[211550] => Chapter #0:5: start 706.831000, end 1011.052000
[211551] => Metadata:
[211552] => title : Part 4
[211553] => Chapter #0:6: start 1011.052000, end 1198.823000
[211554] => Metadata:
[211555] => title : Part 5
[211556] => Chapter #0:7: start 1198.823000, end 1501.408000
[211557] => Metadata:
[211558] => title : Part 6
[211559] => Chapter #0:8: start 1501.408000, end 1579.945000
[211560] => Metadata:
[211561] => title : Part 7
[211562] => Chapter #0:9: start 1579.945000, end 1654.293000
[211563] => Metadata:
[211564] => title : Ending
[211565] => Stream #0:0: Video: vp8 (libvpx), yuv420p, 400x240 [SAR 837:785 DAR 279:157], q=10-42, 250 kb/s, 23.98 fps, 1k tbn, 23.98 tbc (default)
[211566] => Metadata:
[211567] => encoder : Lavc58.64.101 libvpx
[211568] => Side data:
[211569] => cpb: bitrate max/min/avg: 0/0/0 buffer size: 600000 vbv_delay: N/A
[211570] => frame= 0 fps=0.0 q=0.0 Lsize= 1kB time=00:00:00.00 bitrate=N/A speed= 0x
[211571] => video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[211572] => Output file is empty, nothing was encoded (check -ss / -t / -frames parameters if used)
[211573] => Conversion failed!
Command example used:
ffmpeg_command is /usr/local/bin/ffmpeg -ss 100 -t 1 -i /home/website/public_html/media/videos/iphone/2407.mp4 -ss 197 -t 2 -i /home/website/public_html/media/videos/iphone/2407.mp4 -ss 294 -t 3 -i /home/website/public_html/media/videos/iphone/2407.mp4 -ss 391 -t 3 -i /home/website/public_html/media/videos/iphone/2407.mp4 -ss 488 -t 3 -i /home/website/public_html/media/videos/iphone/2407.mp4 -ss 585 -t 3 -i /home/website/public_html/media/videos/iphone/2407.mp4 -ss 682 -t 3 -i /home/website/public_html/media/videos/iphone/2407.mp4 -ss 779 -t 3 -i /home/website/public_html/media/videos/iphone/2407.mp4 -ss 876 -t 3 -i /home/website/public_html/media/videos/iphone/2407.mp4 -filter_complex "[0][1][2][3][4][5][6][7]concat=n=8:v=1:a=0",scale=400:240 -codec:v libx264 -unsharp -b:v 250k -maxrate 250k -bufsize 600k -qmin 10 -qmax 42 -threads 4 -an -y /home/website/public_html/media/videos/tmb/2407/video_copy.mp4
PHP Info:
PHP 5.6
memory_limit 2001M
max_execution_time 7200
upload_max_filesize 2000M
post_max_size 2000M
max_input_time 7200
exec is not disabled

Server info:
CentOS Linux 7 (Core)
ADVANCE-4 - AMD Epyc 7351P - 128GB DDR4 ECC 2400MHz - 2x HDD SATA 4TB Datacenter Class + 2x SSD NVMe 500GB Enterprise Class Soft RAID

All help is really appreciated! Thanks in advance.
-
Enhanced Privacy Control : Matomo’s Guide for Consent Manager Platform Integrations
13 February, by Alex Carmona — Development, Latest Releases
In today's digital landscape, protecting user privacy isn't just about compliance; it's about building trust and demonstrating respect for user choices. Even though you can use Matomo without requiring consent when it is properly configured in compliance with privacy regulations, we're excited to introduce a new Consent Manager Platforms (CMP) category on our Integrations page to make it easier than ever to implement privacy-respecting analytics.
What is a consent management platform?
A Consent Management Platform (CMP) is a tool that helps websites collect, manage, and store user consent for data tracking and cookies in compliance with privacy regulations like GDPR and CCPA. A CMP allows users to choose which types of data they want to share, ensuring transparency and respecting their privacy preferences. By integrating a CMP with Matomo, organisations can make sure that analytics tracking occurs only after obtaining explicit user consent.
Remember, you can configure Matomo to remain fully GDPR compliant, without requiring user consent.
Why consent management matters
With privacy regulations reshaping data collection practices daily, organisations need to ensure that analytics data is gathered only after users have explicitly given their consent. Integrating Matomo with a Consent Management Platform helps you:
- Strengthen regulatory compliance
- Enhance user trust through transparency
- Clearly document consent choices
- Simplify privacy management
By making consent management seamless, you can maintain compliance while delivering a privacy-first experience to your users.
Introducing our CMP integration options
We've carefully curated integrations with leading Consent Management Platforms that work seamlessly with Matomo Analytics and Matomo Tag Manager. Our supported platforms include:
Supported consent management platforms
- Osano – Comprehensive consent management with global regulation support
- Cookiebot – Advanced cookie consent and compliance automation
- CookieYes – User-friendly consent management solution
- Tarte au Citron – Open-source consent management tool
- Klaro – Privacy-focused consent management system
- OneTrust – Enterprise-grade privacy management platform
- Complianz for WordPress – Specialised WordPress consent solution
Each platform provides unique features and compliance options, allowing you to select the best fit for your privacy needs.
Getting started with simplified implementation
Ready to enhance your privacy compliance? We've made the integration process straightforward, so you can set up a privacy-compliant analytics environment in just a few steps. Here's how to begin:
- Explore our new CMP category on the Integrations page
- Select and implement the CMP that best suits your needs
- Check our implementation guides for step-by-step instructions
- Configure your consent management settings in Matomo
- Start collecting analytics data with proper consent management
Moving forward
As privacy regulations evolve and user expectations around data protection grow, proper consent management is more important than ever. With Matomo’s new CMP integrations, you can ensure compliance while maintaining full control over your analytics data.
Visit our Integrations page and our Implementation guides today to explore these privacy-enhancing solutions and take the next step in your privacy-first analytics journey.