
Media (1)
-
Richard Stallman and free software
19 October 2011
Updated: May 2013
Language: French
Type: Text
Other articles (12)
-
Support for all types of media
10 April 2011
Unlike many pieces of software and other modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and other formats (OpenOffice, Microsoft Office (spreadsheets, presentations), web (HTML, CSS), LaTeX, Google Earth) (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
Support audio et vidéo HTML5
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (2730)
-
Error while transcoding video from one format to another
24 October 2019, by MalTec
I am using the Xuggler API to transcode video from one format to another, following the examples provided at
http://wiki.xuggle.com/MediaTool_Introduction and http://www.javacodegeeks.com/2011/02/xuggler-tutorial-transcoding-media.html
public void convertVideo() {
    String sourceUrl = getResourceDirectory() + "/in/AV36_1.AVI";
    String destUrl = getResourceDirectory() + "/out/output.mp4";
    IMediaReader reader = ToolFactory.makeReader(sourceUrl);
    // add a viewer to the reader, to see progress as the media is transcoded
    reader.addListener(ToolFactory.makeViewer(true));
    // create a writer which receives the decoded media from the reader,
    // encodes it and writes it out to the specified file
    IMediaWriter writer = ToolFactory.makeWriter(destUrl, reader);
    // add a debug listener to the writer to see media writer events
    writer.addListener(ToolFactory.makeDebugListener());
    // attach the writer to the reader so the decoded media is passed on to it
    reader.addListener(writer);
    // read packets from the source file; each packet dispatches events to the
    // writer, and this continues until the end of the file is reached
    while (reader.readPacket() == null)
        ;
}

This throws a "could not open" exception:
Exception in thread "main" java.lang.RuntimeException: could not open: D:\Malhar\project_works\VideoConvertter/resources/in/AV36_1.AVI
    at com.xuggle.mediatool.MediaReader.open(MediaReader.java:637)
    at com.xuggle.mediatool.MediaReader.readPacket(MediaReader.java:434)
    at util.VideoEncoder.convertVideo(VideoEncoder.java:38)
    at ConvertVideo.main(ConvertVideo.java:12)

I have tried with a different file too, but the result is the same.
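The exception comes from MediaReader failing to open the input path (note the mixed \ and / separators in the reported location), so the first thing to rule out is that the file simply is not where the code thinks it is. A minimal sketch of that check, assuming the same project layout as in the question (the resources/in directory comes from the post's getResourceDirectory() path, not from Xuggler):

import java.io.File;

import com.xuggle.mediatool.IMediaReader;
import com.xuggle.mediatool.ToolFactory;

public class PathCheck {
    public static void main(String[] args) {
        // assumption: the input lives under resources/in relative to the working directory
        File source = new File("resources/in", "AV36_1.AVI");

        // fail fast, printing the absolute path Xuggler would try to open
        if (!source.isFile()) {
            throw new IllegalStateException("Input not found: " + source.getAbsolutePath());
        }

        // only build the reader once the file is known to exist
        IMediaReader reader = ToolFactory.makeReader(source.getAbsolutePath());
        while (reader.readPacket() == null)
            ;
    }
}

If the check fails, the fix is in how sourceUrl is built (working directory, separators), not in the transcoding code itself.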
-
System.Diagnostics.Process pipe (vertical bar) not accepted as argument
28 September 2019, by emp
I'm trying to execute this command using System.Diagnostics.Process. It works fine on the command line, but in C# it fails on the | character.

var myProcess = new Process();
var p = new ProcessStartInfo();
var sArgs = " -i emp.mp3 -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of emp.mp4";
p.FileName = "ffmpeg.exe";
p.CreateNoWindow = false;
p.RedirectStandardOutput = false;
p.UseShellExecute = false;
p.Arguments = sArgs;
myProcess.StartInfo = p;
myProcess.Start();
myProcess.WaitForExit();

It gives the following error:
Unable to find a suitable output format for '|': Invalid argument
I've looked around on Stack Overflow and found the following hint, but it is also not working:
var psi = new ProcessStartInfo("ffmpeg.exe");
psi.Arguments =
"\"-i emp.mp3 -f wav -\" | \"neroAacEnc -ignorelength -q 0.5 -if - -of emp.mp4\"";
psi.CreateNoWindow = false;
psi.UseShellExecute = false;
var process = new Process { StartInfo = psi };
process.Start();
process.WaitForExit();

This gives the following error:
Unrecognized option 'i emp.mp3 -f wav -'
Failed to set value '|' for option 'i emp.mp3 -f wav -'
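The pipe is interpreted by the shell (cmd.exe), not by the program that ProcessStartInfo launches, so ffmpeg receives the literal '|' argument and rejects it. One common workaround is to hand the whole pipeline to cmd.exe /C and let the shell do the piping; a minimal sketch, assuming ffmpeg.exe and neroAacEnc are on the PATH and reusing the file names from the question (wiring ffmpeg's standard output into neroAacEnc's standard input from C# is the other option):

using System.Diagnostics;

class PipeViaShell
{
    static void Main()
    {
        // cmd.exe understands the | operator, so let it build the pipeline
        var psi = new ProcessStartInfo
        {
            FileName = "cmd.exe",
            Arguments = "/C ffmpeg.exe -i emp.mp3 -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of emp.mp4",
            UseShellExecute = false,
            CreateNoWindow = false
        };

        using (var process = Process.Start(psi))
        {
            process.WaitForExit();
        }
    }
}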
-
How to correctly calculate which segments are ready to be downloaded using MPEG-DASH
24 April 2019, by igal k
What I'm trying to do:
Write a simple MPEG-DASH client that uses the SegmentTemplate pattern to calculate which segments are ready to be downloaded from a live source. A screenshot taken with Chrome's debugging tools at a moment X shows an mpd request (8af651fd747.....mpd) and the segments actually fetched in response to that request.
Given the following MPD:
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="dynamic" profiles="urn:mpeg:dash:profile:isoff-live:2011"
     availabilityStartTime="2019-04-24T06:43:32Z" publishTime="2019-04-24T11:14:01Z"
     minimumUpdatePeriod="PT15.835S" minBufferTime="PT4.096S" maxSegmentDuration="PT4.096S"
     suggestedPresentationDelay="PT11.878S" timeShiftBufferDepth="PT65.536S">
  <Location>https://content-aaps1.uplynk.com/channel/8af651fd7473474f86a05ffb0a1c8972.mpd?rmt=wv&amp;cid=8af651fd7473474f86a05ffb0a1c8972&amp;oid=600e5c27541344a1bf3818617ad712ce&amp;prettydash=1&amp;exp=1556091088&amp;rn=4138683939&amp;tc=1&amp;ct=c&amp;sig=5fb7f0c18f3f1d2ad4fdee53c02c1e1ed904bc5e8474f4ebf886d209ff7f21c9&amp;pbs=05b6594bcf4b4728ac1094976a80194d</Location>
  <Period start="PT2826.240S">
    <AdaptationSet mimeType="video/mp4" segmentAlignment="true" startWithSAP="1" maxWidth="1280" maxHeight="720" maxFrameRate="30">
      <Representation bandwidth="2604473" codecs="avc1.64001e" frameRate="30" width="640" height="360" scanType="progressive">
        <BaseURL>https://x-default-stgec.uplynk.com/aapm/slices/8c1/600e5c27541344a1bf3818617ad712ce/8c1027496a964b049f1bd5895f8f0412/</BaseURL>
        <SegmentTemplate timescale="90000" duration="368640" startNumber="690" presentationTimeOffset="254361599"
                         initialization="https://x-default-stgec.uplynk.com/aapm/slices/8c1/600e5c27541344a1bf3818617ad712ce/8c1027496a964b049f1bd5895f8f0412/$RepresentationID$_init.mp4?pbs=05b6594bcf4b4728ac1094976a80194d&amp;_jt=l&amp;chid=8af651fd7473474f86a05ffb0a1c8972"
                         media="$RepresentationID$$Number%08d$.m4f?pbs=05b6594bcf4b4728ac1094976a80194d&amp;_jt=l&amp;chid=8af651fd7473474f86a05ffb0a1c8972"/>
      </Representation>
    </AdaptationSet>
  </Period>
  <UTCTiming schemeIdUri="urn:mpeg:dash:utc:http-iso:2014" value="https://content-aaps1.uplynk.com/misc/utcservertime"/>
</MPD>

I can see that the next segment request should be #3955.
What I have tried so far:

period.end = 1556104456;
period.start = 2826;
availability_start_time = 1556088212;
max_segment_duration = 4;
time_shift_buffer_depth = 65;

So, first of all, I read DASH-IF-IOP v4.3 section 4.3.4.2 (page 82) and implemented the following code:

int k1 = 1;
int period_duration = period.end - (period.start + data_.availability_start_time);
int k2 = ceil((float)period_duration / (float)data_.max_segment_duration);
double duration = ((float)representation.duration / (float)representation.timeScale);
size_t live_edge = std::min(
(int)floor((float)((data_.publish_time - data_.availability_start_time - period.start) / duration)), k2);
size_t oldest = std::max(k1, (int)floor((float)((data_.publish_time - data_.availability_start_time - period.start -
data_.time_shift_buffer_depth) /
duration)));

After calculating everything: k1=1, k2=3355, live_edge=3272 and oldest=3256.

I also tried using ffmpeg's dashdec.c. For min_segment:
if (c->is_live && pls->fragment_duration)
{
num = pls->first_seq_no + (((get_current_time_in_sec() - c->availability_start_time) - c->time_shift_buffer_depth) * pls->fragment_timescale) / pls->fragment_duration;
}

For max_segment:

num = pls->first_seq_no + (((get_current_time_in_sec() - c->availability_start_time)) * pls->fragment_timescale) / pls->fragment_duration;

After a small modification:

size_t pmax = (((data_.publish_time - data_.availability_start_time))) / duration;
size_t pmin = ((data_.publish_time - data_.availability_start_time) - data_.time_shift_buffer_depth) / duration;

This gives pmin=3946 and pmax=3961.
In the ffmpeg example, I had to manually remove the first_seq_no variable because it looked like I was adding the SegmentTemplate@startNumber twice.
Even after succeeding in this task, how exactly do I build the request list from Segment(NOW) ----> Segment(LIVE_EDGE)?
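Not from the original post, but a minimal sketch of the availability-window arithmetic for this particular MPD may make the numbers concrete. It assumes the usual @duration-based SegmentTemplate interpretation: segment numbering is anchored at SegmentTemplate@startNumber at Period@start, a segment only becomes downloadable once it has been fully produced, and suggestedPresentationDelay decides where inside the window playback should begin. MPD@publishTime stands in for the wall-clock "now":

#include <cmath>
#include <cstdio>

int main() {
    // Values taken from the MPD above, converted to seconds where needed.
    const double availability_start_time = 1556088212.0;       // 2019-04-24T06:43:32Z
    const double period_start            = 2826.240;           // Period@start
    const double segment_duration        = 368640.0 / 90000.0; // @duration / @timescale = 4.096 s
    const double time_shift_buffer_depth = 65.536;             // MPD@timeShiftBufferDepth
    const double suggested_delay         = 11.878;             // MPD@suggestedPresentationDelay
    const int    start_number            = 690;                // SegmentTemplate@startNumber

    // Wall-clock "now"; MPD@publishTime (2019-04-24T11:14:01Z) is used as the example instant.
    const double now = 1556104441.0;

    // Time elapsed inside the Period at "now".
    const double t = now - availability_start_time - period_start;

    // A segment only becomes available once it has been fully produced, so the
    // newest downloadable segment is one behind the segment currently being written.
    const int last_available = start_number + (int)std::floor(t / segment_duration) - 1;

    // The oldest segment still inside the time-shift buffer.
    const int first_available = start_number +
        (int)std::ceil((t - time_shift_buffer_depth) / segment_duration);

    // Where playback would normally start, honouring suggestedPresentationDelay.
    const int live_edge_request = start_number +
        (int)std::floor((t - suggested_delay) / segment_duration);

    std::printf("available: #%d .. #%d, first request at live edge: #%d\n",
                first_available, last_available, live_edge_request);
    return 0;
}

With these inputs it prints available: #3947 .. #3961, first request at live edge: #3959, in the same range as the pmin=3946 / pmax=3961 computed above. The request list Segment(NOW) ----> Segment(LIVE_EDGE) is then just those consecutive numbers substituted into SegmentTemplate@media ($Number%08d$), recomputed as the wall clock advances and the MPD is refreshed.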