
Other articles (7)
Contribute to translation
13 April 2011. You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, allowing it to spread to new linguistic communities.
To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...)
Other interesting software
13 April 2011. We don't claim to be the only ones doing what we do... and we especially don't claim to be the best... What we do, we simply try to do well, and to keep getting better...
The following list covers software that does more or less what MediaSPIP does, or whose features MediaSPIP more or less tries to match.
We don’t know them well and we haven’t tried them, but you can take a peek.
Videopress
Website: http://videopress.com/
License: GNU/GPL v2
Source code: (...)
Selection of projects using MediaSPIP
2 May 2011. The examples below are representative of specific uses of MediaSPIP in specific projects.
MediaSPIP farm @ Infini
The non-profit organization Infini develops hosting activities, an internet access point, training, and innovative projects in the field of information and communication technologies, as well as website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of its kind. Its members (...)
On other sites (3971)
How can I build a custom version of OpenCV while enabling CUDA and GPL codecs? [closed]
10 February, by Josh. I have a hard requirement of Python 3.7 for certain libraries (aeneas & afaligner). I've been using the regular opencv-python and ffmpeg libraries in my program and they've been working fine.


Recently I wanted to adjust my program to use H.264 instead of MPEG-4 and went down a licensing rabbit hole: opencv-python ships a build of ffmpeg with GPL codecs turned off to avoid licensing issues. x264 is GPL-licensed, and is therefore disabled in the opencv-python library.


To solve this, I made a custom build of OpenCV against a custom build of FFmpeg, both with GPL components enabled. This allowed me to use the x264 encoder with the VideoWriter in my Python program.
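
(For context, the relevant call looks roughly like the sketch below; the filename, fps, and frame size are illustrative. With a stock opencv-python wheel, the "avc1" writer typically fails to open, which is the symptom that started this rabbit hole.)

import cv2

# Minimal sketch: with an FFmpeg build configured with --enable-gpl and
# --enable-libx264, this selects the x264 encoder; "mp4v" would be MPEG-4.
fourcc = cv2.VideoWriter_fourcc(*"avc1")
writer = cv2.VideoWriter("out.mp4", fourcc, 30.0, (1280, 720))
if not writer.isOpened():
    raise RuntimeError("H.264 writer unavailable in this OpenCV/FFmpeg build")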


Here's the Dockerfile I've been using to run it:



FROM python:3.7-slim

# Set optimization flags globally. Note: -march=native ties the binaries to the
# build machine's CPU. The number of make jobs is passed as -j at build time,
# since ENV cannot evaluate $(nproc).
ENV CFLAGS="-O3 -march=native -ffast-math -flto -fno-fat-lto-objects -ffunction-sections -fdata-sections" \
 CXXFLAGS="-O3 -march=native -ffast-math -flto -fno-fat-lto-objects -ffunction-sections -fdata-sections" \
 LDFLAGS="-flto -fno-fat-lto-objects -Wl,--gc-sections"

# Combine all system dependencies in a single layer
RUN apt-get update && apt-get install -y --no-install-recommends \
 build-essential \
 cmake \
 git \
 wget \
 unzip \
 yasm \
 pkg-config \
 libsm6 \
 libxext6 \
 libxrender-dev \
 libglib2.0-0 \
 libavcodec-dev \
 libavformat-dev \
 libswscale-dev \
 libavutil-dev \
 libswresample-dev \
 nasm \
 mercurial \
 libnuma-dev \
 espeak \
 libespeak-dev \
 libtiff5-dev \
 libjpeg62-turbo-dev \
 libopenjp2-7-dev \
 zlib1g-dev \
 libfreetype6-dev \
 liblcms2-dev \
 libwebp-dev \
 tcl8.6-dev \
 tk8.6-dev \
 python3-tk \
 libharfbuzz-dev \
 libfribidi-dev \
 libxcb1-dev \
 python3-dev \
 python3-setuptools \
 libsndfile1 \
 libavdevice-dev \
 libavfilter-dev \
 libpostproc-dev \
 && apt-get clean \
 && rm -rf /var/lib/apt/lists/*

# Build x264 with optimizations
RUN cd /tmp && \
 wget https://code.videolan.org/videolan/x264/-/archive/master/x264-master.tar.bz2 && \
 tar xjf x264-master.tar.bz2 && \
 cd x264-master && \
 ./configure \
 --enable-shared \
 --enable-pic \
 --enable-asm \
 --enable-lto \
 --enable-strip \
 --enable-optimizations \
 --bit-depth=8 \
 --disable-avs \
 --disable-swscale \
 --disable-lavf \
 --disable-ffms \
 --disable-gpac \
 --disable-lsmash \
 --extra-cflags="-O3 -march=native -ffast-math -fomit-frame-pointer -flto -fno-fat-lto-objects" \
 --extra-ldflags="-O3 -flto -fno-fat-lto-objects" && \
 make -j"$(nproc)" && \
 make install && \
 cd /tmp && \
 # Build FFmpeg with optimizations
 wget https://ffmpeg.org/releases/ffmpeg-7.1.tar.bz2 && \
 tar xjf ffmpeg-7.1.tar.bz2 && \
 cd ffmpeg-7.1 && \
 ./configure \
 --enable-gpl \
 --enable-libx264 \
 --enable-shared \
 --enable-nonfree \
 --enable-pic \
 --enable-asm \
 --enable-optimizations \
 --enable-lto \
 --enable-pthreads \
 --disable-debug \
 --disable-static \
 --disable-doc \
 --disable-ffplay \
 --disable-ffprobe \
 --disable-filters \
 --disable-programs \
 --disable-postproc \
 --extra-cflags="-O3 -march=native -ffast-math -fomit-frame-pointer -flto -fno-fat-lto-objects -ffunction-sections -fdata-sections" \
 --extra-ldflags="-O3 -flto -fno-fat-lto-objects -Wl,--gc-sections" \
 --prefix=/usr/local && \
 make -j"$(nproc)" && \
 make install && \
 ldconfig && \
 rm -rf /tmp/*

# Install Python dependencies first
RUN pip install --no-cache-dir --upgrade pip setuptools wheel && \
 pip install --no-cache-dir numpy py-spy

# Build OpenCV with optimized configuration
RUN cd /tmp && \
 # Download specific OpenCV version archives
 wget -O opencv.zip https://github.com/opencv/opencv/archive/4.8.0.zip && \
 wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.8.0.zip && \
 unzip opencv.zip && \
 unzip opencv_contrib.zip && \
 mv opencv-4.8.0 opencv && \
 mv opencv_contrib-4.8.0 opencv_contrib && \
 rm opencv.zip opencv_contrib.zip && \
 cd opencv && \
 mkdir build && cd build && \
 cmake \
 -D CMAKE_BUILD_TYPE=RELEASE \
 -D CMAKE_C_FLAGS="-O3 -march=native -ffast-math -flto -fno-fat-lto-objects -ffunction-sections -fdata-sections" \
 -D CMAKE_CXX_FLAGS="-O3 -march=native -ffast-math -flto -fno-fat-lto-objects -ffunction-sections -fdata-sections -Wno-deprecated" \
 -D CMAKE_EXE_LINKER_FLAGS="-flto -fno-fat-lto-objects -Wl,--gc-sections" \
 -D CMAKE_SHARED_LINKER_FLAGS="-flto -fno-fat-lto-objects -Wl,--gc-sections" \
 -D CMAKE_INSTALL_PREFIX=/usr/local \
 -D ENABLE_FAST_MATH=ON \
 -D CPU_BASELINE_DETECT=ON \
 -D CPU_BASELINE=SSE3 \
 -D CPU_DISPATCH=SSE4_1,SSE4_2,AVX,AVX2,AVX512_SKX,FP16 \
 -D WITH_OPENMP=ON \
 -D OPENCV_ENABLE_NONFREE=ON \
 -D WITH_FFMPEG=ON \
 -D FFMPEG_ROOT=/usr/local \
 -D OPENCV_EXTRA_MODULES_PATH=/tmp/opencv_contrib/modules \
 -D PYTHON_EXECUTABLE=/usr/local/bin/python3.7 \
 -D PYTHON3_EXECUTABLE=/usr/local/bin/python3.7 \
 -D PYTHON3_INCLUDE_DIR=/usr/local/include/python3.7m \
 -D PYTHON3_LIBRARY=/usr/local/lib/libpython3.7m.so \
 -D PYTHON3_PACKAGES_PATH=/usr/local/lib/python3.7/site-packages \
 -D PYTHON3_NUMPY_INCLUDE_DIRS=/usr/local/lib/python3.7/site-packages/numpy/core/include \
 -D BUILD_opencv_python3=ON \
 -D INSTALL_PYTHON_EXAMPLES=OFF \
 -D BUILD_TESTS=OFF \
 -D BUILD_PERF_TESTS=OFF \
 -D BUILD_EXAMPLES=OFF \
 -D BUILD_DOCS=OFF \
 -D BUILD_opencv_apps=OFF \
 -D WITH_OPENCL=OFF \
 -D WITH_CUDA=OFF \
 -D WITH_IPP=OFF \
 -D WITH_TBB=OFF \
 -D WITH_V4L=OFF \
 -D WITH_QT=OFF \
 -D WITH_GTK=OFF \
 -D BUILD_LIST=core,imgproc,imgcodecs,videoio,python3 \
 .. && \
 make -j"$(nproc)" && \
 make install && \
 ldconfig && \
 rm -rf /tmp/*

# Set working directory and copy application code
WORKDIR /app

COPY requirements.txt .

# Install the distro ffmpeg CLI: the custom build above used --disable-programs,
# and aeneas needs the ffmpeg/ffprobe executables
RUN apt-get update && apt-get install -y --no-install-recommends ffmpeg && \
 apt-get clean && rm -rf /var/lib/apt/lists/*

RUN pip install --no-cache-dir aeneas afaligner && \
 pip install --no-cache-dir -r requirements.txt

COPY . .

# Make entrypoint executable
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]



My trouble now is that I've been considering running parts of my program on my GPU; it's creating graphics for a video, after all. I have no idea how to edit my Dockerfile to make the OpenCV build run with CUDA enabled; every combination I try leads to issues.
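
(For reference, the usual shape of such a change is sketched below. The base image tag, CUDA version, and compute capability are assumptions that must match the host driver and GPU, and this is untested against the Python 3.7 constraint.)

# Sketch only: image tag and versions are assumptions, not a tested recipe.
# Python 3.7 would have to be installed manually on this base (e.g. built from
# source), since the image ships with Ubuntu's default Python.
FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04

# ... system deps, x264, FFmpeg, and numpy as in the original Dockerfile ...

# In the OpenCV cmake invocation, replace "-D WITH_CUDA=OFF" with:
#  -D WITH_CUDA=ON \
#  -D WITH_CUDNN=ON \
#  -D OPENCV_DNN_CUDA=ON \
#  -D CUDA_ARCH_BIN=7.5 \   # your GPU's compute capability, e.g. 8.6 for RTX 30xx
#  -D BUILD_LIST=core,imgproc,imgcodecs,videoio,python3,cudev,cudaarithm,cudaimgproc \

The CUDA modules live in opencv_contrib, so BUILD_LIST has to include them, and the container must be run with GPU access (docker run --gpus all).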


How can I tell which versions of CUDA, OpenCV and FFmpeg are compatible with Python 3.7? I've tried so many combinations and they all lead to different issues; I've asked various AI agents and they all flounder. Where can I find a reliable source of information about this?


Server Move For multimedia.cx
1 August 2014, by Multimedia Mike — General. I made a big change to multimedia.cx last week: I moved hosting from a shared web hosting plan that I had been using for 10 years to a dedicated virtual private server (VPS). In short, I now have no one to blame but myself for any server problems I experience from here on out.
The tipping point occurred a few months ago when my game music search engine kept breaking regardless of what technology I was using. First, I had an admittedly odd C-based CGI solution which broke due to mysterious binary compatibility issues, the sort that are bound to occur when trying to make a Linux binary run on heterogeneous distributions. The second was an SQLite-based solution. Like the first, it worked great until it didn’t anymore. Something else mysteriously broke vis-à-vis PHP and SQLite on my server. I started investigating a MySQL-based full-text search solution but couldn’t make it work, and decided that I shouldn’t have to either.
Ironically, just before I finished this entire move operation, I noticed that my SQLite-based FTS solution was working again on the old shared host. I’m not sure when that problem went away. No matter, I had already thrown the switch.
How Hard Could It Be?
We all have thresholds for the type of chores we’re willing to put up with and the type we’d rather pay someone else to perform. For the past 10 years, I felt that administering a website’s underlying software was something I would rather pay someone else to worry about. To be fair, 10 years ago I don’t think VPSs were a thing, or at least a viable thing in the consumer space, and I wouldn’t have been competent enough to properly administer one. Though I would have been a full-time Linux user for 5 years at that point, I was still the type to build all of my own packages from source (I may have still been running Linux From Scratch 10 years ago), which might not be the most tractable approach to server stability.
These days, VPSs are a much more affordable option (easily competitive with shared web hosting). I also realized I know exactly how to install and configure all the software that runs the main components of the various multimedia.cx sites, having done it on local setups just to ensure that my automated backups would actually be useful in the event of catastrophe.
All I needed was the will to do it.
The Switchover Process
Here’s the rough plan:
- Investigate options for both VPS providers and mail hosts – I might be willing to run a web server but NOT a mail server
- Start plotting several months in advance of my yearly shared hosting renewal date
- Screw around for several months, playing video games and generally finding reasons to put off the move
- Panic when realizing there are only a few days left before the yearly renewal comes due
So that’s the planning phase. BTW, I chose Digital Ocean for the VPS and Zoho for email hosting. Here’s the execution phase I went through last week:
- Register with Digital Ocean and set up DNS entries to point to the old shared host for the time being
- Once the D-O DNS servers respond correctly using a manual ‘dig’ command, use their servers as the authoritative ones for multimedia.cx
- Create a new Droplet (D-O VPS), install all the right software, move the databases, upload the files; exhaustively document each step, gotcha, and pitfall; treat a VPS as necessarily disposable and have an eye towards iterating the process with a new VPS
- Use /etc/hosts on a local machine to point DNS to the new server and verify that each site is working correctly (see the example after this list)
- After everything looks all right, update the DNS records to point to the new server
Finally, flip the switch on the MX record by pointing it to the new email provider.
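
Step 4 is the standard /etc/hosts override; with a placeholder address standing in for the new Droplet's IP, the entry looks like this:

# /etc/hosts on the testing machine; 203.0.113.10 is a placeholder VPS address
203.0.113.10    multimedia.cx www.multimedia.cx wiki.multimedia.cx

Note that browsers may cache DNS lookups, so a browser restart can be needed for the override to take effect.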
Improvements and Problems
Hosting on Digital Ocean has been quite amazing so far. Maybe it’s the SSDs. Whatever it is, all the sites are performing far better than on the old shared web host. People who edit the MultimediaWiki report that changes get saved in less than the 10 or so seconds required on the old server.
Again, all problems are now my problems. A sore spot with the shared web host was general poor performance. The hosting company would sometimes complain that my sites were using too much CPU. I would have loved to try to optimize things, but the cPanel interface found on many shared hosts doesn’t give you a great deal of data for debugging performance problems. Meanwhile, the same sites, same software, and same load are considerably more performant on the VPS.
Problem: I’ve already had the MySQL database die due to a spike in usage, and I had to restart it manually. I was considering a cron-based solution to check whether the server is running and restart it if not. In response to my analysis that my databases are mostly read and rarely modified, so database crashes shouldn’t be too disastrous, a friend helpfully reminded me that, “You would not make a good sysadmin with attitudes like ‘an occasional crash is okay’.”
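
A minimal sketch of that cron-based watchdog, assuming a Debian-style system with SysV service scripts (the file name and interval are illustrative):

# /etc/cron.d/mysql-watchdog: every 5 minutes, restart MySQL if it is down
*/5 * * * * root service mysql status >/dev/null 2>&1 || service mysql restart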
To this end, I am planning to migrate the database server to a separate VPS. This is a strategy that even Digital Ocean recommends. I’m hoping that the MySQL server isn’t subject to such memory spikes, but I’ll continue to monitor it after I set it up.
Overall, the server continues to get modest amounts of traffic. I predict it will remain that way unless Dark Shikari resurrects the x264dev blog. The biggest spike that multimedia.cx ever saw was when Steve Jobs linked to this WebM post.
Dropped Sites
There are a bunch of subdomains I dropped because I hadn’t done anything with them for years and I doubt anyone will notice they’re gone. One notable section that I decided to drop is the samples.mplayerhq.hu archive. It will live on, but it will be hosted by samples.ffmpeg.org, which had a full mirror anyway. The lower-end VPS instances don’t have the 53 GB necessary.
Going Forward
Here’s to another 10 years of multimedia.cx, even if multimedia isn’t as exciting as it was 10 years ago (personal opinion; I’ll have another post on this later). But at least I can get working on some other projects now that this is done. For the past 4 months or so, whenever I thought of doing some other project, I always remembered that this server move took priority over everything else.
Finding Optimal Code Coverage
7 March 2012, by Multimedia Mike — Programming. A few months ago, I published a procedure for analyzing code coverage of the test suites exercised in FFmpeg and Libav. I used it to add some more tests, and I have it on good authority that it has helped other developers fill in some gaps as well (beginning with students helping out with the projects as part of the Google Code-In program). Now I’m wondering about ways to do better.
Current Process
When adding a test that depends on a sample (like a demuxer or decoder test), it’s ideal to add a sample that is A) small, and B) exercises as much of the codebase as possible. When I was studying code coverage statistics for the WC4-Xan video decoder, I noticed that the sample didn’t exercise one of the 2 possible frame types. So I scouted samples until I found one that covered both types, trimmed the sample down, and updated the coverage suite.
I started wondering about a method for finding the optimal test sample for a given piece of code, one that exercises every code path in a module. Okay, so that’s foolhardy in the vast majority of cases (although I was able to add one test spec that pushed a module’s code coverage from 0% all the way to 100% — but the module in question only had 2 exercisable lines). Still, given a large enough corpus of samples, how can I find the smallest set of samples that exercise the complete codebase?
This almost sounds like an NP-complete problem (it is, in fact, the classic set cover problem). But why should that stop me from trying to find a solution?
Science Project
Here’s the pitch:
- Instrument FFmpeg with code coverage support
- Download lots of media to exercise a particular module
- Run FFmpeg against each sample and log code coverage statistics
- Distill the resulting data in some meaningful way in order to obtain more optimal code coverage
That first step sounds harsh: downloading lots and lots of media. Fortunately, there is at least one multimedia format in the projects that tends to be extremely small: ANSI. These are files that are designed to display elaborate scrolling graphics using text mode. Further, the FATE sample currently deployed for this test (TRE_IOM5.ANS) exercises only a little less than 50% of the code in libavcodec/ansi.c. I believe this makes the ANSI video decoder a good candidate for this experiment.
Procedure
First, find a site that hosts a lot of ANSI files. Hi, sixteencolors.net. This site hosts lots of artpacks (on the order of 4000), which are ZIP archives that contain multiple ANSI files (and sometimes some other files). I scraped a list of all the artpack names.
In an effort to be responsible, I randomized the list of artpacks and downloaded them periodically and with limited bandwidth ('wget --limit-rate=20k').
Run ‘gcov’ on ansi.c in order to gather the full set of line numbers to be covered.
For each artpack, unpack the contents, run the instrumented FFmpeg on each file inside, run ‘gcov’ on ansi.c, and log statistics including the file’s size, the file’s location (artpack.zip:filename), and a comma-separated list of line numbers touched.
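
A sketch of that inner loop in Python follows; the directory layout and the ffmpeg invocation are assumptions, but the .gcov parsing follows gcov's standard "count:lineno:source" text format:

import glob
import os
import shutil
import subprocess
import zipfile

def covered_lines(gcov_path):
    # Executed lines carry a positive count; "-" marks non-executable lines
    # and "#####" marks lines that were never executed.
    lines = set()
    with open(gcov_path, errors="replace") as f:
        for row in f:
            fields = row.split(":", 2)
            if len(fields) == 3:
                count, lineno = fields[0].strip(), fields[1].strip()
                if count not in ("-", "#####") and lineno.isdigit():
                    lines.add(int(lineno))
    return lines

with open("coverage.log", "w") as log:
    for pack in sorted(glob.glob("artpacks/*.zip")):
        shutil.rmtree("work", ignore_errors=True)
        with zipfile.ZipFile(pack) as z:
            z.extractall("work")
        for name in sorted(os.listdir("work")):
            path = os.path.join("work", name)
            if not os.path.isfile(path):
                continue
            # Clear counters so each sample is measured in isolation.
            for gcda in glob.glob("ffmpeg/libavcodec/*.gcda"):
                os.remove(gcda)
            subprocess.run(["./ffmpeg/ffmpeg", "-f", "tty", "-i", path,
                            "-f", "null", "-"], stderr=subprocess.DEVNULL)
            subprocess.run(["gcov", "-o", "ffmpeg/libavcodec", "ansi.c"],
                           stdout=subprocess.DEVNULL)
            log.write("%d %s:%s %s\n" % (
                os.path.getsize(path), os.path.basename(pack), name,
                ",".join(str(n) for n in sorted(covered_lines("ansi.c.gcov")))))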
Definition of ‘Optimal’
The foregoing procedure worked and yielded useful raw data. Now I have to figure out how to analyze it.
I think it’s most desirable to have the smallest files (in terms of bytes) that exercise the most lines of code. To that end, I sorted the results by file size, ascending. A Python script initializes a set of all exercisable line numbers in ansi.c, then iterates through each file’s stats line, adding the file to the list of candidate samples if its set of exercised lines can remove any line numbers from the overall set. Ideally, that set of lines should devolve to an empty set.
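
The selection pass itself is a greedy sweep over the size-sorted stats; a minimal sketch, assuming each record is (size, name, set of exercised lines):

def select_samples(records, exercisable):
    # Walk the samples from smallest to largest, keeping any sample that
    # still covers at least one line no earlier sample has touched.
    remaining = set(exercisable)
    chosen = []
    for size, name, lines in sorted(records):
        newly_covered = lines & remaining
        if newly_covered:
            chosen.append((size, name))
            remaining -= newly_covered
    return chosen, remaining   # 'remaining' is ideally the empty set

Note this follows the smallest-first rule described above rather than the textbook greedy heuristic for set cover (which would pick whichever sample covers the most new lines per byte), so the result is not guaranteed to be minimal.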
I think a second possible approach is to find the single sample that exercises the most code and then proceed with the previously described method.
Initial Results
So far, I have analyzed 13324 samples from 357 different artpacks provided by sixteencolors.net.
Using the first method, I can find a set of samples that covers nearly 80% of ansi.c:
0 bytes: bad-0494.zip:5
1 bytes: grip1293.zip:-ANSI---.---
1 bytes: pur-0794.zip:.
2 bytes: awe9706.zip:-ANSI───.───
61 bytes: echo0197.zip:-(ART)-
62 bytes: hx03.zip:HX005.DAT
76 bytes: imp-0494.zip:IMPVIEW.CFG
82 bytes: ice0010b.zip:_cont'd_.___
101 bytes: bdp-0696.zip:BDP2.WAD
112 bytes: plain12.zip:--------.---
181 bytes: ins1295v.zip:-°VGA°-. н
219 bytes: purg-22.zip:NEM-SHIT.ASC
289 bytes: srg1196.zip:HOWTOREQ.JNK
315 bytes: karma-04.zip:FASHION.COM
318 bytes: buzina9.zip:ox-rmzzy.ans
411 bytes: solo1195.zip:FU-BLAH1.RIP
621 bytes: ciapak14.zip:NA-APOC1.ASC
951 bytes: lght9404.zip:AM-TDHO1.LIT
1214 bytes: atb-1297.zip:TX-ROKL.ASC
2332 bytes: imp-0494.zip:STATUS.ANS
3218 bytes: acepak03.zip:TR-STAT5.ANS
6068 bytes: lgc-0193.zip:LGC-0193.MEM
16778 bytes: purg-20.zip:EZ-HIR~1.JPG
20582 bytes: utd0495.zip:LT-CROW3.ANS
26237 bytes: quad0597.zip:MR-QPWP.GIF
29208 bytes: mx-pack17.zip:mx-mobile-source-logo.jpg
----
109440 bytes total
A few notes about that list: some of those filenames are composed primarily of control characters. 133t, and all that. The first file is 0 bytes. I wondered if I should discard 0-length files but decided to keep those in, especially if they exercise lines that wouldn’t normally be activated. Also, there are a few JPEG and GIF files in the set. I should point out that I forced the tty demuxer using '-f tty', and there isn’t much in the way of signatures for this format. So, again, whatever exercises more lines is better.
Using this same corpus, I tried approach 2: which single sample exercises the most lines of the decoder? Answer: blde9502.zip:REQUEST.EXE. Huh. I checked it out, and ‘file’ IDs it as an MS-DOS executable. So that approach wasn’t fruitful, at least not for this corpus, since I’m forcing everything through this narrow code path.
Think About The Future
Where can I take this next? The cloud! I have people inside the search engine industry who have furnished me with extensive lists of specific types of multimedia files from around the internet. I also see that Amazon Web Services Elastic Compute Cloud (AWS EC2) instances don’t charge for incoming bandwidth.
I think you can see where I’m going with this.
See Also: