
Other articles (75)
-
Configurable image and logo sizes
9 February 2011
In many places on the site, logos and images are resized to fit the slots defined by the themes. Since all of these sizes can vary from one theme to another, they can be defined directly in the theme, sparing the user from having to reconfigure them manually after changing the site's appearance.
These image sizes are also available in the MediaSPIP Core specific configuration. The maximum size of the site logo in pixels, we allow (...) -
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
No talk of markets, cloud, etc.
10 April 2011
The vocabulary used on this site tries to avoid any reference to the buzzwords that flourish so freely
on web 2.0 and in the businesses that live off it.
You are therefore invited to avoid terms such as "Brand", "Cloud", "Market", etc.
Our motivation is above all to create a simple tool, accessible to everyone, that encourages
the sharing of creations on the Internet and lets authors keep as much autonomy as possible.
No "Gold or Premium contract" is therefore planned, no (...)
On other sites (10264)
-
C# Bitmap to AVI / WMV with Compression
18 July 2016, by Digitalsa1nt

Prelude:
I'm going to preface this by saying that I have been learning C# in my spare time at work, and that I have been staring at code for a solid two days trying to wrap my head around this problem. I am developing some software to be used with a visualiser that connects by USB to a standard desktop PC. The software detects the capture device and loads frames into a bitmap via a New Frame event; this is then displayed in a PictureBox as a live video stream. The problem as it stands is trying to incorporate the ability to record the stream and save it to file, preferably a WMV or a compressed AVI.
What's been tried:
I have considered and looked into the following:
SharpAVI - can't seem to get this to compress or save the frames properly, as it appears to mainly operate on existing AVI files.
AForge.Video.VFW - AVI files can be created but are far too large to be used, due to restrictions on the user areas of the individuals who will be using this software.
AForge.Video.FFMPEG - again, due to considerations for those using this software, I can't have unmanaged DLLs sitting in the output folder with the executable file, and unfortunately this particular DLL can't be compiled properly using Costura Fody.
AVIFile Library Wrapper (from Code Project) - again, can't seem to get this to compress a stream correctly from the Bitmaps coming out of the New Frame events.
DirectShow - appears to use C++ and unfortunately is beyond my skill level at this time.
The Relevant Code Snippets:
Current References:
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Resources;
using System.Drawing.Imaging;
using System.IO;
//Aforge Video DLL's
using AForge.Video;
using AForge.Video.VFW;
using AForge.Video.DirectShow;
//Aforge Image DLL's
using AForge.Imaging;
using AForge.Imaging.Formats;
using AForge.Imaging.Filters;
//AviLibrary
using AviFile;

Global Variables:
#region Global Variables
private FilterInfoCollection CaptureDevice; // list of available devices
private VideoCaptureDevice videoSource;
public System.Drawing.Image CapturedImage;
bool toggleMic = false;
bool toggleRec = false;
//aforge
AVIWriter aviWriter;
Bitmap image;
SaveFileDialog saveAVI; // declaration was missing; used in btnRecord_Click
#endregion

Code for Displaying Stream:
#region Displays the Stream
void videoSource_NewFrame(object sender, NewFrameEventArgs eventArgs)
{
picBoxStream.SizeMode = PictureBoxSizeMode.Zoom;
picBoxStream.Image = (Bitmap)eventArgs.Frame.Clone();// clones the bitmap
if (toggleRec == true)
{
image = (Bitmap)eventArgs.Frame.Clone();
aviWriter.AddFrame(image);
}
}
#endregion

Current Code for Recording Stream:
#region Record Button
private void btnRecord_Click(object sender, EventArgs e)
{
if (toggleRec == false)
{
saveAVI = new SaveFileDialog();
saveAVI.Filter = "AVI Files (*.avi)|*.avi";
if (saveAVI.ShowDialog() == DialogResult.OK)
{
aviWriter = new AVIWriter();
aviWriter.Open(saveAVI.FileName, 1280, 720);
toggleRec = true;
Label lblRec = new Label();
}
}
else if (toggleRec == true)
{
aviWriter.Close();
toggleRec = false;
}
}
#endregion

I apologise if the above code doesn't look quite right; I have been swapping, changing and recoding those three sections a lot in order to find a working combination. That means it's rather untidy, but I didn't see the point in cleaning it all up until I had the code working. That being said, any help you can provide is gratefully received, even if it's a case of what I want to do just cannot be done.
Thank you in advance.
EDIT: Postscript:
I will be actively working on this Monday to Friday, so if I make any breakthroughs I will update this question accordingly. This seems to be a frequently sought-after answer, so hopefully we can overcome the issues.
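For comparison, a workaround often suggested for this kind of problem is to hand the raw frames to an external ffmpeg process over a stdin pipe and let ffmpeg do the compression; the same idea translates to C# via Process with RedirectStandardInput. Below is a minimal Python sketch of the command construction only; the frame size, frame rate and output name are hypothetical, and it assumes an ffmpeg binary is available on PATH:

```python
import subprocess

def ffmpeg_pipe_args(width, height, fps, outfile):
    """Build an ffmpeg command that reads headerless raw frames from
    stdin and writes a compressed WMV file."""
    return [
        "ffmpeg", "-y",
        "-f", "rawvideo",           # input is raw, headerless frames
        "-pix_fmt", "bgr24",        # 24bpp GDI+ bitmaps store pixels in BGR order
        "-s", f"{width}x{height}",  # size must be declared for raw input
        "-r", str(fps),
        "-i", "-",                  # read the frames from stdin
        "-c:v", "wmv2",             # a WMV-compatible encoder
        outfile,
    ]

args = ffmpeg_pipe_args(1280, 720, 25, "capture.wmv")
# proc = subprocess.Popen(args, stdin=subprocess.PIPE)
# ...write each frame's pixel bytes to proc.stdin, then close it and wait.
```

The appeal of this design is that only the ffmpeg executable ships alongside the program, with no unmanaged DLLs loaded into the process itself.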
-
Add lensfun filter
13 July 2018, by Stephen Seo

Add lensfun filter

Lensfun is a library that applies lens correction to an image using a database of cameras/lenses (you provide the camera and lens models, and it uses the corresponding database entry's parameters to apply lens correction). It is licensed under LGPL3.

The lensfun filter utilizes the lensfun library to apply lens correction to videos as well as images.

This filter was created out of necessity, since I wanted to apply lens correction to a video and the lenscorrection filter did not work for me.

While this filter requires little info from the user to apply lens correction, the flaw is that lensfun is intended to be used on individual images. When used on a video, parameters such as focal length are held constant, so lens correction may fail on videos where the camera's focal length changes (zooming in or out via zoom lens). To use this filter correctly on videos where such parameters change, timeline editing may be used, since this filter supports it.

Note that valgrind shows a small memory leak which is not from this filter but from the lensfun library (memory is allocated when loading the lensfun database but somehow isn't deallocated even during cleanup; it is briefly created in the init function of the filter, and destroyed before the init function returns). This may have been fixed by the latest commit in the lensfun repository; the current latest release of lensfun dates from almost 3 years ago.

Bi-linear interpolation is used by default, as lanczos interpolation shows more artifacts in the corrected image in my tests.

The lanczos interpolation is derived from lenstool's implementation of lanczos interpolation. Lenstool is an app within the lensfun repository which is licensed under GPL3.

v2 of this patch fixes the license notice in libavfilter/vf_lensfun.c.

v3 of this patch fixes code style and the dependency on GPLv3 (thanks to Paul B Mahol for pointing out the mentioned issues).

v4 of this patch fixes more code style issues that were missed in v3.

v5 of this patch adds line breaks to some of the documentation in doc/filters.texi (thanks to Gyan Doshi for pointing out the issue).

v6 of this patch fixes more problems (thanks to Moritz Barsnick for pointing them out).

v7 of this patch fixes use of sqrt() (changed to sqrtf(); thanks to Moritz Barsnick for pointing this out). It should also be rebased off of the latest master branch commits at this point.

Signed-off-by: Stephen Seo <seo.disparate@gmail.com>
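To illustrate how a filter like the one described above is typically invoked, here is a sketch in Python that assembles an ffmpeg command line. The camera and lens strings are placeholders that must match entries in the lensfun database, and the option names (make, model, lens_model, mode, interpolation) are taken from the filter's documentation, so treat the exact spelling as an assumption:

```python
import subprocess

def lensfun_cmd(src, dst, make, model, lens_model):
    """Build an ffmpeg command applying the lensfun filter to a video.

    The make/model/lens_model values are looked up in the lensfun
    database at runtime; if no entry matches, the filter errors out.
    Values containing spaces may need escaping in the filtergraph.
    """
    graph = (
        f"lensfun=make={make}"
        f":model={model}"
        f":lens_model={lens_model}"
        ":mode=geometry"           # correct geometric distortion only
        ":interpolation=linear"    # bi-linear, the default per the commit message
    )
    return ["ffmpeg", "-i", src, "-vf", graph, dst]

# Hypothetical camera/lens pair; running this requires an ffmpeg
# build with lensfun support enabled.
cmd = lensfun_cmd("in.mp4", "out.mp4", "Canon", "EOS100D", "EF-S18-55")
# subprocess.run(cmd, check=True)
```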
-
mpv / ffplay struggling with --lavfi-complex and -vf
26 November 2017, by Marcuzzz
This is what I'm trying to accomplish in mpv:
https://user-images.githubusercontent.com/7437046/33134083-130c2cc0-cf9f-11e7-8f8d-237297dc9c93.png
It currently works with ffplay.
The code for it looks like this:

LAVFI = ("movie='$MOVIE':streams=dv+da [video][audio]; " +
"[video]scale=512:-1, split=3[video1][video2][video3];" +
"[audio]asplit=[audio1][audio2]; " +
"[video1]format=nv12,waveform=graticule=green:mode=column:display=overlay:" +
"mirror=1:components=7:envelope=instant:intensity=0.2, " +
"scale=w=512:h=512:flags=neighbor, " +
"pad=w=812:h=812:color=gray [scopeout]; " +
"[video2]scale=512:-1:flags=neighbor[monitorout]; " +
"[audio1]ebur128=video=1:meter=18:framelog=verbose:peak=true[ebur128out][out1]; " +
"[ebur128out]scale=300:300:flags=fast_bilinear[ebur128scaledout]; " +
"[scopeout][ebur128scaledout]overlay=x=512:eval=init[videoandebu]; " +
"[audio2]avectorscope=s=301x301:r=10:zoom=5, " +
"drawgrid=x=149:y=149:t=2:color=green [vector]; " +
"[videoandebu][monitorout]overlay=y=512:eval=init[comp3]; " +
"[comp3][vector]overlay=x=512:y=300:eval=init, " +
"setdar=1/1, setsar=1/1, " +
"drawtext=fontfile='$FONT':timecode='$TIMECODE':" +
"r=$FPS:x=726:y=0:fontcolor=white[comp]; " +
"[video3]format=nv12,vectorscope=mode=color3, " +
"scale=1:1[vectorout]; "+
"[comp][vectorout]overlay=x=512:y=600:eval=init[out0]")

I found it on Github:
https://github.com/Warblefly/FFmpeg-Scope

After experimenting a lot, I came to understand it a bit...
This code works with python and mpv.

LAVFI = "[aid1]asplit=3[audio1][audio2][audio3];" + \
"[audio1]avectorscope=s=640x640[audioscope];" + \
"[audio2]ebur128=video=1:meter=18[ebu][ao];" + \
"[audio3]showvolume[showv];" + \
"[vid1]scale=640:-1, split=4[video1][video2][video3][video4];" + \
"[video1]format=nv12[comp];" + \
"[video2]hflip[comp2];" + \
"[video3]format=nv12,[comp3]; " + \
"[comp][audioscope]overlay=y=-160[a];" + \
"[comp2][ebu]overlay[b];" + \
"[comp3][showv]overlay[c];" + \
"[video4][a]hstack=inputs=2[top]; " + \
"[b][c]hstack=inputs=2[bottom]; " + \
"[top][bottom]vstack=inputs=2[vo]"
dos_command = [MPV + 'mpv','--lavfi-complex',LAVFI,filename_raw]
subprocess.check_output(dos_command)

But if I change [video3]format=nv12,[comp3] to [video3]waveform[comp3] or [video3]vectorscope[comp3] it doesn't work. If I change it to [video3]negate[comp3] it works...

So to explain the above code a bit:
First we split the incoming audio 3 times.
Then we pass it to 3 measuring outputs: avectorscope, ebur128, showvolume.
I don't understand ebur128=.....[ebu][ao], the "[ao]" bit...
Then we focus on the video; we split it 4 times.
I use "format", I don't know exactly why...
Then we start overlaying and stacking the "labels".
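A practical way to narrow down a problem like this is to preview each suspect chain on its own with ffplay -vf before composing the full --lavfi-complex graph: if waveform or vectorscope already fails in isolation, the problem is the filter's input format rather than the graph wiring (note also that scope filters change the output frame size, which can upset later overlay/hstack stages). A small Python sketch of that approach; the filename is a placeholder and it assumes ffplay is on PATH:

```python
import subprocess

# Candidate chains from the question above; previewing each one in
# isolation helps find the filter that breaks the composed graph.
CANDIDATES = [
    "scale=640:-1,negate",                # reported to work when composed
    "scale=640:-1,format=nv12,waveform",  # reported to fail when composed
    "scale=640:-1,format=nv12,vectorscope",
]

def ffplay_cmd(filename, vf):
    """Build an ffplay invocation previewing a single video filter chain."""
    return ["ffplay", "-vf", vf, filename]

for vf in CANDIDATES:
    cmd = ffplay_cmd("input.mp4", vf)
    # subprocess.run(cmd)  # uncomment to preview each chain interactively
```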