
Media (1)
-
Spitfire Parade - Crisis
15 May 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (92)
-
Personalizing by adding your logo, banner or background image
5 September 2013
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
User profiles
12 April 2011
Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
Users can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...) -
Configuring language support
15 November 2010
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administrer" section of the site.
From there, in the navigation menu, you can reach a "Gestion des langues" section that lets you enable support for new languages.
Each newly added language can still be disabled as long as no object has been created in that language. In that case, it becomes greyed out in the configuration and (...)
On other sites (12224)
-
aarch64: Add NEON optimizations for 10 and 12 bit vp9 MC
14 December 2016, by Martin Storsjö
aarch64: Add NEON optimizations for 10 and 12 bit vp9 MC
This work is sponsored by, and copyright, Google.
This has mostly got the same differences to the 8 bit version as
in the arm version. For the horizontal filters, we do 16 pixels
in parallel as well. For the 8 pixel wide vertical filters, we can
accumulate 4 rows before storing, just as in the 8 bit version.
Examples of runtimes vs the 32 bit version, on a Cortex A53:
ARM AArch64
vp9_avg4_10bpp_neon : 35.7 30.7
vp9_avg8_10bpp_neon : 93.5 84.7
vp9_avg16_10bpp_neon : 324.4 296.6
vp9_avg32_10bpp_neon : 1236.5 1148.2
vp9_avg64_10bpp_neon : 4639.6 4571.1
vp9_avg_8tap_smooth_4h_10bpp_neon : 130.0 128.0
vp9_avg_8tap_smooth_4hv_10bpp_neon : 440.0 440.5
vp9_avg_8tap_smooth_4v_10bpp_neon : 114.0 105.5
vp9_avg_8tap_smooth_8h_10bpp_neon : 327.0 314.0
vp9_avg_8tap_smooth_8hv_10bpp_neon : 918.7 865.4
vp9_avg_8tap_smooth_8v_10bpp_neon : 330.0 300.2
vp9_avg_8tap_smooth_16h_10bpp_neon : 1187.5 1155.5
vp9_avg_8tap_smooth_16hv_10bpp_neon : 2663.1 2591.0
vp9_avg_8tap_smooth_16v_10bpp_neon : 1107.4 1078.3
vp9_avg_8tap_smooth_64h_10bpp_neon : 17754.6 17454.7
vp9_avg_8tap_smooth_64hv_10bpp_neon : 33285.2 33001.5
vp9_avg_8tap_smooth_64v_10bpp_neon : 16066.9 16048.6
vp9_put4_10bpp_neon : 25.5 21.7
vp9_put8_10bpp_neon : 56.0 52.0
vp9_put16_10bpp_neon/armv8 : 183.0 163.1
vp9_put32_10bpp_neon/armv8 : 678.6 563.1
vp9_put64_10bpp_neon/armv8 : 2679.9 2195.8
vp9_put_8tap_smooth_4h_10bpp_neon : 120.0 118.0
vp9_put_8tap_smooth_4hv_10bpp_neon : 435.2 435.0
vp9_put_8tap_smooth_4v_10bpp_neon : 107.0 98.2
vp9_put_8tap_smooth_8h_10bpp_neon : 303.0 290.0
vp9_put_8tap_smooth_8hv_10bpp_neon : 893.7 828.7
vp9_put_8tap_smooth_8v_10bpp_neon : 305.5 263.5
vp9_put_8tap_smooth_16h_10bpp_neon : 1089.1 1059.2
vp9_put_8tap_smooth_16hv_10bpp_neon : 2578.8 2452.4
vp9_put_8tap_smooth_16v_10bpp_neon : 1009.5 933.5
vp9_put_8tap_smooth_64h_10bpp_neon : 16223.4 15918.6
vp9_put_8tap_smooth_64hv_10bpp_neon : 32153.0 31016.2
vp9_put_8tap_smooth_64v_10bpp_neon : 14516.5 13748.1
These are generally about as fast as the corresponding ARM routines on the same CPU (at least on the A53), in most cases marginally faster. The speedup vs C code is around 4-9x.
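As a rough check of the "marginally faster" claim, the ratios can be computed straight from the table above. A throwaway Python snippet (the three routine names and cycle counts are copied from the table; nothing else is assumed):

# Cycle counts copied from the benchmark table above: (32-bit ARM, AArch64).
samples = {
    "vp9_avg_8tap_smooth_8hv_10bpp_neon": (918.7, 865.4),
    "vp9_put32_10bpp_neon": (678.6, 563.1),
    "vp9_put64_10bpp_neon": (2679.9, 2195.8),
}

for name, (arm, aarch64) in sorted(samples.items()):
    # A ratio above 1.0 means the AArch64 routine takes fewer cycles.
    print("%s: %.2fx" % (name, arm / aarch64))

For these routines the ratios come out between roughly 1.06x and 1.22x, consistent with "about as fast, in most cases marginally faster".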
Signed-off-by: Martin Storsjö <martin@martin.st>
- [DH] libavcodec/aarch64/Makefile
- [DH] libavcodec/aarch64/vp9dsp_init.h
- [DH] libavcodec/aarch64/vp9dsp_init_10bpp_aarch64.c
- [DH] libavcodec/aarch64/vp9dsp_init_12bpp_aarch64.c
- [DH] libavcodec/aarch64/vp9dsp_init_16bpp_aarch64_template.c
- [DH] libavcodec/aarch64/vp9dsp_init_aarch64.c
- [DH] libavcodec/aarch64/vp9mc_16bpp_neon.S
-
How to detect anamorphic video with FFProbe?
29 May 2015, by FlavorScape
This is the output I get from FFProbe for a video I'm certain is anamorphic. I've converted it as a test with ffmpeg and the results are consistent with the video having a different PAR and DAR (the video is squished). I ran some command-line parameters to fix the anamorphic video and it worked. Or possibly my diagnosis is incorrect and the PAR and DAR are just plain wrong?
The command I used to "correct" the anamorphic video is
--custom-anamorphic --display-width 1280 --keep-display-aspect --modulus 8 --crop 0:0:0:0
Is there an additional FFProbe command to detect anamorphic video? So far I'm just checking whether the sample_aspect_ratio and display_aspect_ratio are the same or not.
Additionally, a ratio of 0:1 seems incorrect. My video is not infinitely wide. Is there a bug in the FFProbe output?
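A minimal sketch of such a check in Python, assuming ffprobe is on the PATH (the is_anamorphic helper is hypothetical, and it tests for a non-square sample_aspect_ratio rather than comparing SAR and DAR directly):

import json
import subprocess

def is_anamorphic(path):
    # Query stream metadata as JSON, much like the ffprobe command quoted below.
    out = subprocess.check_output(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_streams", path]
    )
    for stream in json.loads(out).get("streams", []):
        if stream.get("codec_type") != "video":
            continue
        sar = stream.get("sample_aspect_ratio", "1:1")
        # ffprobe reports "0:1" when the aspect ratio is simply not set,
        # so treat that like square pixels rather than as a real ratio.
        return sar not in ("1:1", "0:1")
    return False

print(is_anamorphic("input.mov"))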
This is the command
-print_format json -show_format -show_streams {originalFilePath}
FFProbe version N-54233-g86190af, built on Jun 27 2013, outputs the following:
{
    "streams": [
        {
            "index": 0,
            "codec_name": "h264",
            "codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",
            "profile": "Main",
            "codec_type": "video",
            "codec_time_base": "1/5994",
            "codec_tag_string": "avc1",
            "codec_tag": "0x31637661",
            "width": 1280,
            "height": 720,
            "has_b_frames": 0,
            "sample_aspect_ratio": "0:1",
            "display_aspect_ratio": "0:1",
            "pix_fmt": "yuv420p",
            "level": 31,
            "r_frame_rate": "2997/100",
            "avg_frame_rate": "2997/100",
            "time_base": "1/2997",
            "start_pts": 0,
            "start_time": "0.000000",
            "duration_ts": 204100,
            "duration": "68.101435",
            "bit_rate": "3894381",
            "nb_frames": "2041",
            "disposition": {
                "default": 0,
                "dub": 0,
                "original": 0,
                "comment": 0,
                "lyrics": 0,
                "karaoke": 0,
                "forced": 0,
                "hearing_impaired": 0,
                "visual_impaired": 0,
                "clean_effects": 0,
                "attached_pic": 0
            },
            "tags": {
                "creation_time": "2013-05-03 18:33:37",
                "language": "eng",
                "handler_name": "Apple Alias Data Handler"
            }
        },
        {
            "index": 1,
            "codec_name": "aac",
            "codec_long_name": "AAC (Advanced Audio Coding)",
            "codec_type": "audio",
            "codec_time_base": "1/44100",
            "codec_tag_string": "mp4a",
            "codec_tag": "0x6134706d",
            "sample_fmt": "fltp",
            "sample_rate": "44100",
            "channels": 2,
            "bits_per_sample": 0,
            "r_frame_rate": "0/0",
            "avg_frame_rate": "0/0",
            "time_base": "1/44100",
            "start_pts": 0,
            "start_time": "0.000000",
            "duration_ts": 3003392,
            "duration": "68.104127",
            "bit_rate": "125304",
            "nb_frames": "2933",
            "disposition": {
                "default": 0,
                "dub": 0,
                "original": 0,
                "comment": 0,
                "lyrics": 0,
                "karaoke": 0,
                "forced": 0,
                "hearing_impaired": 0,
                "visual_impaired": 0,
                "clean_effects": 0,
                "attached_pic": 0
            },
            "tags": {
                "creation_time": "2013-05-03 18:33:37",
                "language": "eng",
                "handler_name": "Apple Alias Data Handler"
            }
        }
    ],
    "format": {
        "filename": "\\\\dell690\\vsf\\_asset_intake\\v2\\ed69c939-4fe1-40dd-a045-db72ed2e0009\\original\\USTC_Overview2.mov",
        "nb_streams": 2,
        "format_name": "mov,mp4,m4a,3gp,3g2,mj2",
        "format_long_name": "QuickTime / MOV",
        "start_time": "0.000000",
        "duration": "68.100000",
        "size": "34267583",
        "bit_rate": "4025560",
        "tags": {
            "major_brand": "qt",
            "minor_version": "537199360",
            "compatible_brands": "qt",
            "creation_time": "2013-05-03 18:33:37"
        }
    }
}
-
FFmpeg and Code Coverage Tools
21 August 2010, by Multimedia Mike (FATE Server, Python)
Code coverage tools likely occupy the same niche as profiling tools: tools that you're supposed to use somewhere during the software engineering process but probably never quite get around to using, usually because you're too busy adding features or fixing bugs. But there may come a day when you wish to learn how much of your code is actually being exercised in normal production use. For example, the team charged with continuously testing the FFmpeg project would be curious to know how much code is being exercised, especially since many of the FATE test specs explicitly claim to be "exercising XYZ subsystem".
The primary GNU code coverage tool is called gcov and is probably already on your GNU-based development system. I set out to determine how much FFmpeg source code is exercised while running the full FATE suite. I ran into some problems when trying to use gcov on a project-wide scale. I spackled around those holes with some very ad-hoc solutions. I’m sure I was just overlooking some more obvious solutions about which you all will be happy to enlighten me.
Results
I've learned to cut to the chase earlier in blog posts (results first, methods second). With that, here are the results I produced from this experiment. This Google spreadsheet contains 3 sheets: the first contains code coverage stats for a bunch of FFmpeg C files, sorted first by percent coverage (ascending), then by number of lines (descending), thus highlighting which files have the most uncovered code (ffserver.c currently tops that chart). The second sheet has files for which no stats were generated. The third sheet has "problems": files that were rejected by my ad-hoc script.
Here's a link to the data in CSV if you want to play with it yourself.
Using gcov with FFmpeg
To instrument a program for gcov analysis, compile and link the target program with the -fprofile-arcs and -ftest-coverage options. These need to be applied at both the compile and link stages, so in the case of FFmpeg, configure with:

./configure \
  --extra-cflags="-fprofile-arcs -ftest-coverage" \
  --extra-ldflags="-fprofile-arcs -ftest-coverage"
The building process results in a bunch of .gcno files which pertain to code coverage. After running the program as normal, a bunch of .gcda files are generated. To get coverage statistics from these files, run 'gcov sourcefile.c'. This will print some basic statistics as well as generate a corresponding .gcov file with more detailed information about exactly which lines have been executed, and how many times.
Be advised that the source file must either live in the same directory from which gcov is invoked, or else the path to the source must be given to gcov via the '-o, --object-directory' option.
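As an aside (not from the original post), the same per-file invocation can be scripted. A rough Python sketch that shells out to gcov and scrapes its "Lines executed" summary line, assuming the .gcno/.gcda files live alongside the object files:

import re
import subprocess

def coverage_for(source_file, object_dir="."):
    # Run gcov as described above; -o points at the directory holding
    # the .gcno/.gcda files that correspond to this source file.
    out = subprocess.check_output(
        ["gcov", "-o", object_dir, source_file], universal_newlines=True
    )
    # gcov prints a summary line such as "Lines executed:87.50% of 240".
    match = re.search(r"Lines executed:(\d+\.\d+)% of (\d+)", out)
    if match is None:
        return None
    return float(match.group(1)), int(match.group(2))

print(coverage_for("libavcodec/utils.c", "libavcodec"))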
Resetting Statistics
Statistics in the .gcda files are cumulative. Should you wish to reset the statistics, doing this in the build directory should suffice:

find . -name "*.gcda" | xargs rm -f
Getting Project-Wide Data
As mentioned, I had to get a little creative here to get a big picture of FFmpeg code coverage. After building FFmpeg with the code coverage options and running FATE:

for file in `find . -name "*.c"`; do
  echo "*****" $file
  gcov -o `dirname $file` `basename $file`
done > ffmpeg-code-coverage.txt 2>&1
After that, I ran the ffmpeg-code-coverage.txt file through a custom Python script to print out the 3 CSV files that I later dumped into the Google Spreadsheet.
Further Work
I'm sure there are better ways to do this, and I'm sure you all will let me know what they are. But I have to get the ball rolling somehow.
There's also TestCocoon. I'd like to try that program and see if it addresses some of gcov's shortcomings (assuming they are indeed shortcomings rather than oversights).
Source for script: process-gcov-slop.py
#!/usr/bin/python

import re

lines = open("ffmpeg-code-coverage.txt").read().splitlines()

no_coverage = ""
coverage = "filename, % covered, total lines\n"
problems = ""

# Matches the summary line that gcov prints for each source file.
stats_exp = re.compile(r'Lines executed:(\d+\.\d+)% of (\d+)')

for i in xrange(len(lines)):
    line = lines[i]
    if line.startswith("***** "):
        filename = line[line.find('./')+2:]
        i += 1
        if lines[i].find(":cannot open graph file") != -1:
            no_coverage += filename + '\n'
        else:
            # Skip ahead to the block of gcov output for this file.
            while lines[i].find(filename) == -1 and not lines[i].startswith("***** "):
                i += 1
            try:
                (percent, total_lines) = stats_exp.findall(lines[i+1])[0]
                coverage += filename + ', ' + percent + ', ' + total_lines + '\n'
            except IndexError:
                problems += filename + '\n'

open("no_coverage.csv", 'w').write(no_coverage)
open("coverage.csv", 'w').write(coverage)
open("problems.csv", 'w').write(problems)