Other articles (56)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.
    Distribution name | Version name | Version number
    Debian | Squeeze | 6.x.x
    Debian | Wheezy | 7.x.x
    Debian | Jessie | 8.x.x
    Ubuntu | The Precise Pangolin | 12.04 LTS
    Ubuntu | The Trusty Tahr | 14.04
    If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above, or send us the necessary fixes to add (...)

  • MediaSPIP Core: Configuration

    9 November 2010

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin to work): a page for the general configuration of the templates; a page for the configuration of the site’s home page; a page for the configuration of the sections.
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling their display and their specific features (...)

  • Managing creation and editing rights for objects

    8 February 2011

    By default, many features are restricted to administrators, but each remains independently configurable so that the minimum status required to use it can be changed, notably: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;

On other sites (9308)

  • Hung out to dry

    31 May 2013, by Mans — Law and liberty

    Outrage was the general reaction when Google recently announced their dropping of XMPP server-to-server federation from Hangouts, as the search giant’s revamped instant messaging platform is henceforth to be known. This outrage is, however, largely unjustified; Google’s decision is merely a rational response to issues of a more fundamental nature. To see why, we need to step back and look at the broader instant messaging landscape.

    A brief history of IM

    The term instant messaging (IM) gained popularity in the mid-1990s along with the rise of chat clients such as ICQ, AOL Instant Messenger, and later MSN Messenger. These all had one thing in common: they were closed systems. Although global in the sense of allowing access from anywhere on the Internet, communication was possible only within each network, and only using the officially sanctioned client software. Contrast this with email, where users are free to choose any service provider as well as client software, with inter-server communication over open protocols delivering messages to their proper destinations.

    The email picture has, however, not always been so rosy. During the 1970s and 80s a multitude of incompatible email systems (e.g. UUCP and X.400) were in more or less widespread use on various networks. As these networks gave way to the ARPANET/Internet, so did their mail systems to the SMTP email we all use today. A similar consolidation has yet to occur in the area of instant messaging.

    Over the years, a few efforts towards cross-domain instant messaging have been undertaken. One early example is the Zephyr system created as part of Project Athena at MIT in the late 1980s. While it never saw significant uptake, it is still in use at a few universities. A more successful story is that of XMPP. Conceived under the name Jabber in the late 1990s, XMPP is an open standard specified in a set of IETF RFCs. In addition to being open, a distinguishing feature of XMPP compared to other contemporary IM systems is its decentralised nature, with server-to-server connections allowing communication between users with accounts on different systems. Just like email.

    The social network

    A more recent emergence on the Internet is the social network. Although not the first of its kind, Facebook was the first to achieve its level of penetration, both geographically and across social groups. A range of messaging options, including email-style as well as instant messaging (chat), are available, all within the same web interface. What it does not allow is communication outside the Facebook network. Other social networks operate in the same spirit.

    The popularity of social networks, to the extent that they for many constitute the primary means of communication, has in a sense brought back the fragmented networks of the 1980s. Even though they share infrastructure, up to and including the browser application, the social networks create walled-off regions of the Internet between which little or no exchange is possible.

    The house that Google built

    In 2005, Google launched Talk, an XMPP-based instant messaging service allowing users to connect using either Google’s official client application or any third-party XMPP client. Soon after, server-to-server federation was activated, enabling anyone with a Google account to exchange instant messages with users of any other federated XMPP service. An in-browser chat interface was also added to Gmail.

    It was arguably only with the 2011 introduction of Google+ that Google, despite its previous endeavours with Orkut and Buzz, had a viable contender in the social networking space. Since its inception, Google+ has gone through a number of changes where features have been added or reworked. Instant messaging within Google+ was until recently available only in mobile clients. On the desktop, the sole messaging option was Hangouts which, although featuring text chat, cannot be considered instant messaging in the usual sense.

    With a sprawling collection of messaging systems (Talk, Google+ Messenger, Hangouts), some action to consolidate them was a logical step. What we got was a unification under the Hangouts name. A redesigned Google+ now sports in-browser instant messaging similar to the Talk interface already present in Gmail. At the same time, the standalone desktop Talk client is discontinued, as is the Messenger feature in mobile Google+. Altogether, the changes make for a much less confusing user experience.

    The sky is falling down

    Along with the changes to the messaging platform, one announcement stoked anger on the Internet: Google’s intent to discontinue XMPP federation (as of this writing, it is still operational). Google, the (self-described) champions of openness on the Internet, were seen to be closing their doors to the outside world. The effects of the change are, however, not quite so earth-shattering. Of the other major messaging networks to offer XMPP at all (Facebook, Skype, and the defunct Microsoft Messenger), none support federation; a Google user has never been able to chat with a Facebook user.

    XMPP federation appears to be in use mainly by non-profit organisations or individuals running their own servers. The number of users on these systems is hard to assess, though it seems fair to assume it is dwarfed by the hundreds of millions using Google or Facebook. As such, the overall impact of cutting off communication with the federated servers is relatively minor, albeit annoying for those affected.

    A fragmented world

    Rather than chastising Google for making a low-impact and presumably well-founded business decision, we should be asking ourselves why instant messaging is still so fragmented in the first place, whereas email is not. The answer can be found by examining the nature of the entities providing these services.

    Ever since the commercialisation of the Internet started in the 1990s, email has been largely seen as being part of the Internet. Access to email was a major selling point for Internet service providers; indeed, many still use the email facilities of their ISP. Instant messaging, by contrast, has never come as part of the basic offering, rather being a third-party service running on top of the Internet.

    Users wishing to engage in instant messaging have always had to seek out and sign up with a provider of such a service. As the IM networks were isolated, most would choose whichever service their friends were already using, and a small number of networks, each with a sustainable number of users, came to dominate. In the early days, dedicated IM services such as ICQ were popular. Today, social networks have taken their place with Facebook currently in the dominant position. With the new Hangouts, Google offers its users the service they want in the way they have come to expect.

    Follow the money

    We now have all the pieces necessary to see why inter-domain instant messaging has never taken off, and the answer is simple: the major players have no commercial incentive to open access to their IM networks. In fact, they have good reason to keep the networks closed. Ensuring that a person leaving the network loses contact with his or her friends increases user retention by raising the cost of switching to another service. Monetising users is also better facilitated if they are forced to remain on, say, Facebook’s web pages while using its services rather than accessing them indirectly, perhaps even through a competing (Google, say) frontend. The users do not generally care much, since all their friends are already on the same network as themselves.

    While Google Talk was a standalone service, only loosely coupled to other Google products, these aspects were of lesser importance. After all, Google still had access to all the messages passing through the system and could analyse them for advert targeting purposes. Now that messaging is an integrated part of Google+, and thus serves as a direct competitor to the likes of Facebook, the situation has changed. All the reasons for Facebook not to open its network now apply equally to Google as well.

  • FFMPEG : How to avoid audio/video desync in output of crossfaded clips when input is variable frame rate video

    25 December 2018, by Anders Lunde

    I’m doing screen recordings of gameplay (Dota2) using my NVIDIA graphics card’s GeForce Experience hardware recording (NVENC encoder). This creates a variable frame rate output video. My NVIDIA settings are 60 fps at 15000 kbps. I have paid a guy to make a program that generates scripts which, given start/stop timepoints, can extract clips from the video and merge them with a crossfade. See the example code below. The script works for many input recordings but often fails: the audio and video are desynchronized (usually an audio delay) in many of the clips, by ca. 0.5 seconds. I think it fails more when the frame rate dropped more during recording. He does not know how to fix the problem, and I wonder if anyone could point out whether anything could be fixed in the script (example below)?

    Processing speed is quite important (right now, making a 10 min ’highlight’ video takes ca. 7-10 min). Solutions that increase that time very much are unfortunately not of much interest. His approach has been to work separately with audio and video and merge them at the end. He already has a program to generate ffmpeg code for different scenarios (also adding overlays, adding music, intro/outro), so some easy fixes to his code would be preferable to a dramatic redesign of the logic. But if nothing else can fix the problem, a redesign of the logic is OK. Using tools other than ffmpeg is also OK, but they should be automatable (scripts/CLI) and should not increase processing times too much.

    Running the program "mediainfo" on the input video shows that the frame rate dropped quite low during recording:

    Frame rate mode : Variable

    Frame rate : 60.000 FPS

    Minimum frame rate : 3.059 FPS

    Maximum frame rate : 63.739 FPS

    Full report here: https://pastebin.com/TX061Wih

    The input video can be downloaded from Dropbox here (6 GB):
    https://www.dropbox.com/s/ftwdgapazbi62pr/fullgame.mp4?dl=0

    Here is an example of the script when asked to extract two clips from the input video, at 9:57 (41 sec length) and 15:45 (28 sec length), and to merge them with a 0.5 second crossfade. There might be some code remnants from options that are not used in this example (overlays, music, intro/outro). Using the input video above, this creates audio/video desync.

    6 commands executed in sequence:

    ffmpeg.exe -loglevel warning -ss 00:09:57 -i fullgame.mp4 -t 00:00:41 -filter_complex "[0:a]afade=t=out:st=40.5:d=0.5[a1]" -map "[a1]" -y out_temp_00.mp4.wav

    ffmpeg.exe -loglevel warning -i fullgame.mp4 -ss 00:09:57 -t 00:00:41 -an -vcodec copy -f mpegts -avoid_negative_ts make_zero -y out_temp_00.mp4.ts

    ffmpeg.exe -loglevel warning -ss 00:15:45 -i fullgame.mp4 -t 00:00:28 -filter_complex "[0:a]afade=t=in:st=0:d=0.5[a1]" -map "[a1]" -y out_temp_01.mp4.wav

    ffmpeg.exe -loglevel warning -i fullgame.mp4 -ss 00:15:45 -t 00:00:28 -an -vcodec copy -f mpegts -avoid_negative_ts make_zero -y out_temp_01.mp4.ts

    ffmpeg.exe -loglevel warning -i out_temp_00.mp4.wav -i out_temp_01.mp4.wav -y -filter_complex "[0:a]adelay=0|0[a0];[1:a]adelay=40500|40500[a1];[a0][a1]amix=inputs=2:dropout_transition=68.5,atrim=duration=68.5[outa0];[outa0]loudnorm[outa]" -map "[outa]" -ar 48000 -acodec aac -strict -2 fullgame_Output.mp4.aac

    ffmpeg.exe -loglevel warning -i out_temp_00.mp4.ts -i out_temp_01.mp4.ts -y -i fullgame_Output.mp4.aac  -filter_complex "[0:v]trim=start=0.5,setpts=PTS-STARTPTS[0c];[1:v]trim=start=0.5,setpts=PTS-STARTPTS[1c];[0:v]trim=40.5:41,setpts=PTS-STARTPTS[fo];[1:v]trim=0:0.5[fi];[fi]format=pix_fmts=yuva420p,fade=t=in:st=0:d=0.5:alpha=1[z];[fo]format=pix_fmts=yuva420p,fade=t=out:st=0:d=0.5:alpha=1[x];[z]fifo[w];[x]fifo[q];[q][w]overlay[r];[0c][r][1c]concat=n=3[outv]" -map "[outv]" -map 2:a -shortest -acodec copy -vcodec libx264 -preset ultrafast -b 15000k -aspect 1920:1080 fullgame_Output.mp4

    P.S.

    I already asked for help in an ffmpeg chat room. One guy said he knew what the problem was, but didn’t know how to fix it (?):

    [00:10] <kepstin> oh, wait, you're using -vcodec copy
    [00:10] <kepstin> that explains everything.
    [00:10] <kepstin> when you're using -vcodec copy, the start time (set with -ss) is rounded to the nearest keyframe
    [00:10] <kepstin> it's not exact
    [00:11] <kepstin> depending on the keyframe interval, this will result in possibly quite large shifts
    [00:11] <kepstin> (also, your commands are applying audio filters on commands with -an, which is confusing/contradictory)
    [00:12] <birdboy88> so the problem is that the audio temporary clips are not being extracted from the same excat timepoints?
    [00:13] <kepstin> birdboy88: yeah, your audio is being re-encoded to wav so it's being cut sample-accurate, but the video's not being precisely cut.
    [00:16] <birdboy88> kepstin: so I need to use slow seek (?) to extract video accurately? Or somehow extract audio only where there are video keyframes?
    [00:17] <kepstin> birdboy88: i don't know how to extract audio starting at video keyframes with ffmpeg cli. You're already doing slow seek, which doesn't help (you should move the -ss option to before the -i option to speed it up)
    [00:17] <kepstin> if you want accurate video cutting when saving to a file, you have to re-encode the video
    [00:18] <kepstin> (doing this in a single ffmpeg command means you don't have to save to a file, so you can avoid the issue)
    [00:18] * kepstin is off for a bit now
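
    Based on kepstin’s explanation, here is a minimal sketch (my own illustration, not from the chat) of a frame-accurate cut of the first clip: the video is re-encoded instead of stream-copied, so audio and video get cut at the same timestamps. The output filename is just a placeholder.

    :: untested sketch: fast seek with -ss before -i, re-encode with libx264 instead of -vcodec copy
    ffmpeg.exe -loglevel warning -ss 00:09:57 -i fullgame.mp4 -t 00:00:41 -c:v libx264 -preset ultrafast -c:a aac -y out_temp_00_accurate.mp4

    Doing everything in a single ffmpeg command, as in the attempts below, avoids the intermediate files altogether, which is what kepstin suggests at the end of the chat.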

    EDIT:
    Everything is done with the latest ffmpeg version.

    I was unable to get Gyan’s code to work. It always loses some audio (the resulting audio is either 40.5 or 27.5 seconds long, so only one of the two audio clips is used). This is the only version working for me (the changes were adelay=40500|40500 and amix=inputs=2[a0];[a0]loudnorm):

    ffmpeg -i fullgame.mp4 -filter_complex "[0]split=2[vpre][vpost];
    [0]asplit=2[apre][apost];
    [vpre]trim=start='00:09:57':duration='00:00:41',setpts=PTS-STARTPTS[vpre-t];
    [apre]atrim=start='00:09:57':duration='00:00:41',asetpts=PTS-STARTPTS,afade=t=out:st=40.5:d=0.5[apre-t];
    [vpost]trim=start='00:15:45':duration='00:00:28',setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5:alpha=1,setpts=PTS+40.5/TB[vpost-t];
    [apost]atrim=start='00:15:45':duration='00:00:28',asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5,adelay=40500|40500[apost-t];
    [vpre-t][vpost-t]overlay[v];
    [apre-t][apost-t]amix=inputs=2[a0];[a0]loudnorm[a]" -map "[v]" -map "[a]" -y -c:v libx264 -preset ultrafast -b:v 15000k -aspect 1920:1080 -c:a aac fullgame_Output.mp4

    Then I tried using a similar setup but with 3 clips, but on one machine I got the error "Error while filtering: Cannot allocate memory". And on my 16 GB memory machine the processing speed is 0.02x! Any way to avoid this? This is the code I tried:

    ffmpeg -i fullgame.mp4 -filter_complex "[0]split=3[vpre][vpost][v3];
    [0]asplit=3[apre][apost][a3];
    [vpre]trim=start=357:duration=41,setpts=PTS-STARTPTS[vpre-t];
    [apre]atrim=start=357:duration=41,asetpts=PTS-STARTPTS,afade=t=out:st=40.5:d=0.5[apre-t];
    [vpost]trim=start=795:duration=28,setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5:alpha=1,fade=t=out:st=40.5:d=0.5:alpha=1,setpts=PTS+40.5/TB[vpost-t];
    [apost]atrim=start=795:duration=28,asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5,afade=t=out:st=27.5:d=0.5,adelay=40500|40500[apost-t];
    [v3]trim=start=95:duration=30,setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5,setpts=PTS+68.5/TB[v3-t];
    [a3]atrim=start=95:duration=30,asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5,adelay=68500|68500[a3-t];
    [vpre-t][vpost-t]overlay[v1];
    [v1][v3-t]overlay[v];
    [apre-t][apost-t][a3-t]amix=inputs=3[a0];
    [a0]loudnorm[a]" -map "[v]" -map "[a]" -y -c:v libx264 -preset ultrafast -b:v 15000k -aspect 1920:1080 -c:a aac fullgame_Output.mp4
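
    For reference, a rough sketch of a possible alternative using the xfade and acrossfade filters (available in newer ffmpeg builds, 4.3+): each clip is fast-seeked as a separate input, so the full-length recording never has to be split inside the filter graph, and the fps filter first normalizes the variable frame rate to a constant 60 fps, which xfade requires. Untested, and the output filename is only a placeholder.

    :: untested sketch, requires ffmpeg 4.3+ for xfade/acrossfade; the crossfade starts 40.5 s into the first 41 s clip
    ffmpeg.exe -loglevel warning -ss 00:09:57 -t 00:00:41 -i fullgame.mp4 -ss 00:15:45 -t 00:00:28 -i fullgame.mp4 -filter_complex "[0:v]fps=60[v0];[1:v]fps=60[v1];[v0][v1]xfade=transition=fade:duration=0.5:offset=40.5[v];[0:a][1:a]acrossfade=d=0.5,loudnorm[a]" -map "[v]" -map "[a]" -c:v libx264 -preset ultrafast -b:v 15000k -c:a aac -y fullgame_xfade_sketch.mp4

    A third clip could be handled by feeding [v] and another fps-normalized input into a second xfade (and [a] into a second acrossfade), instead of splitting the source three ways.
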
  • Further Dreamcast Hacking

    3 February 2011, by Multimedia Mike — Sega Dreamcast

    I’m still haunted by Sega Dreamcast programming, specifically the fact that I used to be able to execute custom programs on the thing (roughly 8-10 years ago) and now I cannot. I’m going to compose a post to describe my current adventures on this front. There are 3 approaches I have been using: Raw, KallistiOS, and the almighty Linux.


    Raw
    What I refer to as "raw" is an assortment of programs that lived in a small number of source files (sometimes just one ASM file) and could be compiled with the most basic SH-4 toolchain. The advantage here is that there aren’t many moving parts and not many things that can possibly go wrong, so it provides a good functional baseline.

    One of the original Dreamcast hackers was Marcus Comstedt, who still has his original DC material hosted at the reasonably easy-to-remember URL mc.pp.se/dc. I can get some of these simple demos to work, but not others.

    I also successfully assembled and ran a pair of 256-byte (!!) demos from this old DC scene page.

    KallistiOS
    KallistiOS (or just KOS) was a real-time OS developed for the DC and was popular among the DC homebrew community. All the programming I did back in the day was based around KOS. Now I can’t get any of it to work. More specifically, KOS can’t seem to make it past a certain point in its system initialization.

    The Linux Option
    I was never that excited about running Linux on my Dreamcast. For some hackers, running Linux on a given piece of consumer electronics is the highest attainable goal. Back in the day, I looked at it from a much more pragmatic perspective— I didn’t see much use in running Linux on the DC, not as much as running KOS which was developed to be a much more appropriate fit.

    However, I was able to burn a CD-R of an old binary image of Linux 2.4.5 compiled for the Dreamcast and boot it some months ago. So I at least have a feeling that this should work. I have never cross-compiled a kernel of my own (though I have compiled many, many x86 kernels in my time, so I’m not a total n00b in this regard). I figured this might be a good time to start.

    The first item that worries me is getting a functional cross-compiling toolchain. Fortunately, a little digging in the Linux kernel documentation pointed me in the direction of a bunch of ready-made toolchains hosted at kernel.org. So I grabbed one of the SH toolchains (gcc-4.3.3-nolibc) and got rolling.

    I’m well familiar with the cycle of 'make menuconfig' in order to pick configuration options, and then 'make' to build a kernel (or usually 'make zImage' or 'make bzImage' to create compressed images). For cross compiling, the primary difference seems to be editing the root Makefile in the Linux source code tree (I’m using 2.6.37, the latest stable as of this writing) and setting a value for the CROSS_COMPILE variable. Then, run 'make menuconfig' followed by 'make' as normal.
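
    For the record, a minimal sketch of the equivalent command-line invocation, assuming the kernel.org toolchain’s binaries (with the sh4-linux- prefix) are on the PATH; passing ARCH and CROSS_COMPILE on the command line is an alternative to editing the root Makefile:

    # select the SuperH architecture and the (assumed) sh4-linux- tool prefix
    make ARCH=sh CROSS_COMPILE=sh4-linux- menuconfig
    make ARCH=sh CROSS_COMPILE=sh4-linux-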

    The Linux 2.6 series is supposed to support a range of Renesas (formerly Hitachi) SH processors and board configurations. This includes reasonable defaults for the Sega Dreamcast hardware. I got it all compiling except for a series of .S files. Linus Torvalds once helped me debug a program I work on so I thought I’d see if there was something I could help debug here.

    The first issue was with ASM statements of a form similar to:

    mov #0xffffffe0, r1
    

    Now, the DC’s SH-4 is a RISC CPU. A lot of RISC architectures adopt a fixed instruction size of 32 bits. You can’t encode an entire 32-bit immediate value inside of a 32-bit instruction (there would be no room for the instruction encoding). Further, the SH series encodes instructions in a mere 16 bits. The move immediate data instruction only allows for an 8-bit, sign-extended value.

    I decided that the above statement is equivalent to:

    mov #-32, r1
    

    I’ll give this statement the benefit of the doubt that it used to work with the gcc toolchain somewhere along the line. I assume that the assembler is supposed to know enough to substitute the first form with the second: the low byte 0xE0 is -32 as a signed 8-bit value, so sign-extending it back to 32 bits restores 0xFFFFFFE0.

    The next problem is that an ’sti’ instruction shows up in a number of spots. Using Intel x86 conventions, this is a "set interrupt flag" instruction (I remember that the 6502 CPU had the same instruction mnemonic, though its interrupt flag’s operation was opposite that of the x86). The SH-4 reference manual lists no ’sti’ instruction. When it gets to these lines, the assembler complains about immediate move instructions with too large data, like the instructions above. I’m guessing they must be macro’d to something else but I failed to find where. I commented out those lines for the time being. Probably not that smart, but I want to keep this moving for now.

    So I got the code to compile into a kernel file called ’vmlinux’. I’ve seen this file many times before but never thought about how to get it to run directly. The process has usually been to compress it and send it over to lilo or grub for loading, as that is the job of the bootloader. I had never even wondered what format the vmlinux file takes until now. It seems that ’vmlinux’ is just a plain old ELF file:

    $ file vmlinux
    vmlinux: ELF 32-bit LSB executable, Renesas SH,
    version 1 (SYSV), statically linked, not stripped
    

    The ’dc-tool’ program that uploads executables to the waiting bootloader on the Dreamcast is perfectly cool accepting ELF files (and S-record files, and raw binary files). After a very lengthy upload process, execution fails (resets the system).

    For the sake of comparison, I dusted off that Linux 2.4.5 bootable Dreamcast CD-ROM and directly uploaded the vmlinux file from that disc. That works just fine (until it’s time to go to the next loading phase, i.e., finding a filesystem). Possible issues here could include the commented ’sti’ instructions (could be that they aren’t just decoration). I’m also trying to understand the memory organization— perhaps the bootloader wants the ELF to be based at a different address. Or maybe the kernel and the bootloader don’t like each other in the first place— in this case, I need to study the bootable Linux CD-ROM to see how it’s done.

    Optimism
    Even though I’m meeting with rather marginal success, this is tremendously educational. I greatly enjoy these exercises if only for the deeper understanding they bring for the lowest-level system details.