Testing

Twitter testing out a new “consequences” button? Not really…

I just had an interesting experience after adding a German SIM card to my phone. My Android version automatically switched all apps into German mode. This can be handy, but I found it annoying.

Where it got really odd is in the Twitter Lite PWA-ish app. As it runs in a Chrome WebView, the browser tries to be extra helpful and translates the interface back to English. That way I ended up with a “consequences” button, which made me do a double-take:

Twitter with a new consequences button

I thought it was a new way to deal with lots of answers, letting you vote answers up and down. I was also impressed that tweets can now like people, but no. “Follow” in German is “Folgen”. And as a noun instead of a verb, “Folgen” in German means “consequences” in English.

One more thing to worry about when a webview gets too clever for its own good.

View full post on Christian Heilmann

Level Up Your Cross-Browser Testing

Today we’re announcing a special opportunity for web developers to learn how to build and automate functional browser tests — we’ve partnered with Sauce Labs to offer a special extended trial of their excellent tools, and we’ve created a custom learning resource as part of this trial.

2016: The year of web compat

In 2016 we made a lot of noise about cross-browser compatibility.

Let there be no question: Cross-browser compatibility is essential for a robust, healthy internet, and we’ll continue urging developers to build web experiences that work for more people on more browsers and devices.

That is by no means the end of it, however — we also intend to make it easier to achieve cross-browser compatibility, with tools and information to help developers make the web as interoperable as possible.

Special access to Sauce Labs

Beginning today, we’re introducing an exceptional opportunity for web developers. Our partners — Sauce Labs — are giving web developers special access to a complete suite of cross-browser testing tools, including hundreds of OS/browser combinations and full access to automation features.

The offer includes a commitment-free extended trial access period for Sauce Labs tools, with extra features beyond what is available in the standard trial. To find out more and take advantage of the offer, visit our signup page now.

Access to Mozilla’s new testing tutorials

We’ve also brought together expert trainers from Sauce Labs and MDN to create a streamlined tutorial to help developers get up and running quickly with automated website testing, covering the most essential concepts and skills you’ll need: See Mozilla’s cross-browser testing tutorial.
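
If you are new to automated functional testing, here is a rough sketch of what such a test can look like, written with the selenium-webdriver npm package against a remote Sauce Labs browser. The endpoint URL, capabilities and credential scheme below are illustrative assumptions; the tutorial walks through the real setup:

// A minimal remote functional test sketch (assumptions: the classic
// Sauce Labs WebDriver endpoint, and SAUCE_USERNAME/SAUCE_ACCESS_KEY
// environment variables holding your account credentials).
const { Builder, By, until } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder()
    .usingServer(
      'https://' + process.env.SAUCE_USERNAME + ':' +
      process.env.SAUCE_ACCESS_KEY + '@ondemand.saucelabs.com/wd/hub'
    )
    .withCapabilities({ browserName: 'firefox' })
    .build();

  try {
    await driver.get('https://example.com/');
    // Wait for the page heading, then assert on its text
    const heading = await driver.wait(until.elementLocated(By.css('h1')), 10000);
    const text = await heading.getText();
    console.log(text === 'Example Domain' ? 'PASS' : 'FAIL');
  } finally {
    await driver.quit();
  }
})();

Running the same script against different browserName/platform capability combinations is the core of cross-browser test automation.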

What’s next

This is a time-limited offer from our friends at Sauce Labs. We want to understand whether access to tools like this can help developers deliver effective, performant, interoperable web experiences across today’s complex landscape of browsers and devices.

Sign up and let us know how it goes. Is the trial helpful to you? Do you have constructive feedback on the tooling? Please let us know in the comments below.

View full post on Mozilla Hacks – the Web developer blog

Testing out node and express without a local install or editor

Node.js and Express.js based web apps are getting a lot of attention these days. Setting up the environment is not rocket science, but it might be too much effort if all you want is to kick the tires of this technology stack.

Here’s a simple way to play with both for an hour without having to install anything on your computer or even having access to an editor. This is a great opportunity for showcases, hackathons or a quick proof of concept.

The trick is to use a one-hour trial server of Azure. You don’t need to sign up for anything beyond authenticating, and you can use a Facebook, Google or Microsoft account for that. You can edit your site right in the browser or by using Git, and you can export your efforts for re-use after you discard the trial server.

Here’s the short process to start playing:

  1. Go to http://tryappservice.azure.com/
  2. Make sure “Web App” is selected (it is the default)
  3. Click “Next”
  4. Select “ExpressJS”
  5. Click “Create”
  6. Sign in with a Facebook, Google or Microsoft account. *
  7. Wait for the server to get set up (1-2 minutes)
  8. Click the URL of your app to see it
  9. Select “Edit with Visual Studio online”
  10. Start editing your site live in the browser

* This is a measure to prevent people from spawning lots of bots with this service. Your information isn’t stored, you won’t be contacted and you don’t need to provide any payment details.

You can also clone the site, edit it locally and push to the server with Git. The server is available for an hour and will expire after that. In addition to Git, you can also download the app or its publishing profile.
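
For orientation, the trial server scaffolds a standard Express application; the route behind the start page looks roughly like this (the exact contents depend on the template version the trial uses):

// routes/index.js as generated by the Express scaffolding (approximate)
var express = require('express');
var router = express.Router();

/* GET home page: render views/index.jade with a title variable */
router.get('/', function(req, res) {
  res.render('index', { title: 'Express' });
});

module.exports = router;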

Video

There is a quick video on YouTube of me showing how to set up a trial server.

Video transcript

Hello, I am Chris Heilmann and I want to quickly show you how to set up a node and Express.js server without having to install anything on your computer. This is a good way to try these technologies out without having to mess with your settings or needing an editor. Or if you want to show something to a client who doesn’t know what node and express do for them.

What you do is you go to a website that allows you to try out Microsoft Azure without having to do anything like buying it, or signing up for it or something like that.

Everything happens in the browser. You go to tryappservice.azure.com and then you say you want to start an application. I say I want to have a web application and if I click Next I have a choice of saying I want an asp.net page or whatever. But right now I just want to have an Express.js page. I click this one, I say create and then I can sign in with any of these settings. We only do that so spammers don’t create lots and lots of files and lots and lots of servers.

There’s no billing going on, there is no recording going on. You can sign out again afterwards. You can log in with Facebook, log in with Google or you can log in with Microsoft. In this case let’s sign in with Google – it’s just my account here – and I sign in with it.

Then I can select what kind of application I want. So now it’s churning in the background and setting up a machine for me on Azure that I will have for one hour. It has node and Express.js installed and also gives me an online editor so I can play with the information without having to use my own editor or having an editor on my computer.

This takes about a minute up to two minutes depending on how much traffic we have on the servers, but I’ve never waited more than five minutes.

So there you go – your free web app is available for one hour and will expire in 58 minutes. So you’ve got that much time to play with the Express and Node server. After that, you can download the app content, you can download a publishing profile (in case you want to stay on Azure and do something there). Or you can clone or push with git so you can have it on your local machine or put it on another server afterwards.

What you also have is that you can edit with Visual Studio “Monaco”. So if I click on that one, I get Visual Studio in my browser. I don’t have to download anything, I don’t have to install anything – I can start playing with the application.

The URL is up here so I just click on that one. This is of course not a pretty URL, but it’s just for one hour so it’s a throwaway URL anyways. So now I have “express” and “welcome to express” here, and if I go to my www root and start editing the index.jade file in my views I can say “hi at” instead of “welcome to”. It automatically saved it for me in the background, so if I now reload the page it should tell me “hi at express” here. I can also go through all the views and all the functionality and the routes – the things that Express.js gives me. If I edit index.js here and instead of “express” I say “Azure”, it again saves automatically for me. I don’t have to do anything else; I just reload the page and instead of “express” it says “Azure”.

If you’re done with it you can just download it, or you can download the publishing profile, or you can just let it go away. If you say, for example, that you want to have a previous session, it will ask you to delete the one that you have right now.

It’s a great way to test things out and you don’t have to install anything on your computer. So if you always wanted to play with Node.js and Express but were too timid to go to the command line, worried about messing up your computer, or don’t have your own editor, this is a great way to show people the power of node and express without having to have any server. So, have fun and play with that.

View full post on Christian Heilmann

Performance Testing Firefox OS With Raptor

When we talk about performance for the Web, a number of familiar questions may come to mind:

  • Why does this page take so long to load?
  • How can I optimize my JavaScript to be faster?
  • If I make some changes to this code, will that make this app slower?

I’ve been working on making these types of questions easier to answer for Gaia, the UI layer for Firefox OS, a completely web-centric mobile device OS. Writing performant web pages for the desktop has its own idiosyncrasies, and writing native applications using web technologies takes the challenge up an order of magnitude. I want to introduce the challenges I’ve faced in making performance an easier topic to tackle in Firefox OS, as well as document my solutions and expose holes in the Web’s APIs that need to be filled.

From now on, I’ll refer to web pages, documents, and the like as applications, and while web “documents” typically don’t need the performance attention I’m going to give here, the same techniques could still apply.

Fixing the lifecycle of applications

A common question I get asked in regard to Firefox OS applications:

How long did the app take to load?

Tough question, as we can’t be sure we are speaking the same language. Based on UX and my own research at Mozilla, I’ve tried to adopt this definition for determining the time it takes an application to load:

The amount of time it takes to load an application is measured from the moment a user initiates a request for the application to the moment the application appears ready for user interaction.

On mobile devices, this is generally from the time the user taps on an icon to launch an app, until the app appears visually loaded; when it looks like a user can start interacting with the application. Some of this time is delegated to the OS to get the application to launch, which is outside the control of the application in question, but the bulk of the loading time should be within the app.

So window load, right?

With SPAs (single-page applications), Ajax, script loaders, deferred execution, and friends, window load doesn’t hold much meaning anymore. If we could merely measure the time it takes to hit load, our work would be easy. Unfortunately, there is no way to infer the moment an application is visually loaded in a predictable way for everyone. Instead we rely on the apps to imply these moments for us.

For Firefox OS, I helped develop a series of conventional moments that are relevant to almost every application for implying its loading lifecycle (also documented as a performance guideline on MDN):

navigation loaded (navigationLoaded)

The application designates that its core chrome or navigation interface exists in the DOM and has been marked as ready to be displayed, e.g. when the element is not display: none or any other functionality that would affect the visibility of the interface element.

navigation interactive (navigationInteractive)

The application designates that the core chrome or navigation interface has its events bound and is ready for user interaction.

visually loaded (visuallyLoaded)

The application designates that it is visually loaded, i.e., the “above-the-fold” content exists in the DOM and has been marked as ready to be displayed, again not display: none or other hiding functionality.

content interactive (contentInteractive)

The application designates that it has bound the events for the minimum set of functionality to allow the user to interact with “above-the-fold” content made available at visuallyLoaded.

fully loaded (fullyLoaded)

The application has been completely loaded, i.e., any relevant “below-the-fold” content and functionality have been injected into the DOM, and marked visible. The app is ready for user interaction. Any required startup background processing is complete and should exist in a stable state barring further user interaction.

The important moment is visually loaded. This correlates directly with what the user perceives as “being ready.” As an added bonus, using the visuallyLoaded metric pairs nicely with camera-based performance verifications.
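
How an application decides when to dispatch these moments is up to the app itself. One possible pattern (my sketch, not something the guideline mandates) is to queue the mark behind two animation frames, so it fires only after the above-the-fold DOM changes have actually been painted:

// Sketch: mark visuallyLoaded after the above-the-fold content
// has been flushed to the screen (double requestAnimationFrame).
function markVisuallyLoaded() {
  window.requestAnimationFrame(function() {
    window.requestAnimationFrame(function() {
      performance.mark('visuallyLoaded');
    });
  });
}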

Denoting moments

With a clearly-defined application launch lifecycle, we can denote these moments with the User Timing API, available in Firefox OS starting with v2.2:

window.performance.mark( string markName )

Specifically during a startup:

performance.mark('navigationLoaded');
performance.mark('navigationInteractive');
...
performance.mark('visuallyLoaded');
...
performance.mark('contentInteractive');
performance.mark('fullyLoaded');

You can even use the measure() method to create a measurement between the navigation start and another mark, or even between two other marks:

// Denote point of user interaction
performance.mark('tapOnButton');

loadUI();

// Capture the time from now (sectionLoaded) to tapOnButton
performance.measure('sectionLoaded', 'tapOnButton');

Fetching these performance metrics is pretty straightforward with getEntries, getEntriesByName, or getEntriesByType, which fetch a collection of the entries, for example:
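
// All marks recorded so far
var marks = performance.getEntriesByType('mark');

// Or a specific mark by name; startTime is relative to the page's time origin
var visuallyLoaded = performance.getEntriesByName('visuallyLoaded')[0];
console.log(visuallyLoaded.name, visuallyLoaded.startTime);

The purpose of this article isn’t to cover the usage of User Timing though, so I’ll move on.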

Armed with the moment an application is visually loaded, we know how long it took the application to load because we can just compare it to—oh, wait, no. We don’t know the moment of user intent to launch.

While desktop sites may be able to easily procure the moment at which a request was initiated, doing this on Firefox OS isn’t as simple. In order to launch an application, a user will typically tap an icon on the Homescreen. The Homescreen lives in a process separate from the app being launched, and we can’t communicate performance marks between them.

Solving problems with Raptor

Without the APIs or interaction mechanisms available in the platform to overcome this and other difficulties, we’ve built tools to help. This is how the Raptor performance testing tool originated. With it, we can gather metrics from Gaia applications and answer the performance questions we have.

Raptor was built with a few goals in mind:

  • Performance test Firefox OS without affecting performance. We shouldn’t need polyfills, test code, or hackery to get realistic performance metrics.
  • Utilize web APIs as much as possible, filling in gaps through other means as necessary.
  • Stay flexible enough to cater to the many different architectural styles of applications.
  • Be extensible for performance testing scenarios outside the norm.

Problem: Determining moment of user intent to launch

Given two independent applications — Homescreen and any other installed application — how can we create a performance marker in one and compare it in another? Even if we could send our performance mark from one app to another, they are incomparable. According to High-Resolution Time, the values produced would be monotonically increasing numbers from the moment of the page’s origin, which is different in each page context. These values represent the amount of time passed from one moment to another, and not to an absolute moment.

The first breakdown in existing performance APIs is that there’s no way to associate a performance mark in one app with any other app. Raptor takes a simplistic approach: log parsing.

Yes, you read that correctly. Every time Gecko receives a performance mark, it logs a message (i.e., to adb logcat) and Raptor streams and parses the log looking for these log markers. A typical log entry looks something like this (we will decipher it later):

I/PerformanceTiming( 6118): Performance Entry: clock.gaiamobile.org|mark|visuallyLoaded|1074.739956|0.000000|1434771805380

The important thing to notice in this log entry is its origin: clock.gaiamobile.org, or the Clock app; here the Clock app created its visually loaded marker. In the case of the Homescreen, we want to create a marker that is intended for a different context altogether. This is going to need some additional metadata to go along with the marker, but unfortunately the User Timing API does not yet have that ability. In Gaia, we have adopted an @ convention to override the context of a marker. Let’s use it to mark the moment of app launch as determined by the user’s first tap on the icon:

performance.mark('appLaunch@' + appOrigin)

Launching the Clock from the Homescreen and dispatching this marker, we get the following log entry:

I/PerformanceTiming( 5582): Performance Entry: verticalhome.gaiamobile.org|mark|appLaunch@clock.gaiamobile.org|80081.169720|0.000000|1434771804212

With Raptor we change the context of the marker if we see this @ convention.
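
Conceptually, the override amounts to splitting the entry name on the @ and re-assigning the context. This is an illustrative sketch, not Raptor’s actual source:

// Sketch: resolve the context override encoded by the '@' convention.
function resolveContext(baseContext, entryName) {
  var at = entryName.indexOf('@');
  if (at === -1) {
    return { context: baseContext, name: entryName };
  }
  return {
    context: entryName.slice(at + 1), // e.g. 'clock.gaiamobile.org'
    name: entryName.slice(0, at)      // e.g. 'appLaunch'
  };
}

resolveContext('verticalhome.gaiamobile.org', 'appLaunch@clock.gaiamobile.org');
// => { context: 'clock.gaiamobile.org', name: 'appLaunch' }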

Problem: Incomparable numbers

The second breakdown in existing performance APIs deals with the incomparability of performance marks across processes. Using performance.mark() in two separate apps will not produce meaningful numbers that can be compared to determine a length of time, because their values do not share a common absolute time reference point. Fortunately there is an absolute time reference that all JS can access: the Unix epoch.

Date.now(), at any given moment, returns the number of milliseconds that have elapsed since January 1st, 1970. Raptor had to make an important trade-off: abandon the precision of high-resolution time for the comparability of the Unix epoch. Looking at the previous log entry, let’s break down its output. Notice the correlation of certain pieces to their User Timing counterparts:

  • Log level and tag: I/PerformanceTiming
  • Process ID: 5582
  • Base context: verticalhome.gaiamobile.org
  • Entry type: mark, but could be measure
  • Entry name: appLaunch@clock.gaiamobile.org, the @ convention overriding the mark’s context
  • Start time: 80081.169720
  • Duration: 0.000000, this is a mark, not a measure
  • Epoch: 1434771804212

For every performance mark and measure, Gecko also captures the epoch of the mark, and we can use this to compare times from across processes.
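
Using the two log entries shown above, the comparison becomes simple epoch arithmetic; a sketch with the values from those entries:

// Epochs captured by Gecko for the two marks shown earlier
var appLaunchEpoch = 1434771804212;      // Homescreen: appLaunch@clock.gaiamobile.org
var visuallyLoadedEpoch = 1434771805380; // Clock: visuallyLoaded

// Comparable across processes because both share the Unix epoch
var launchTime = visuallyLoadedEpoch - appLaunchEpoch;
console.log('Cold launch to visuallyLoaded: ' + launchTime + 'ms'); // 1168ms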

Pros and Cons

Everything is a game of tradeoffs, and performance testing with Raptor is no exception:

  • We trade high-resolution times for millisecond resolution in order to compare numbers across processes.
  • We trade JavaScript APIs for log parsing so we can access data without injecting custom logic into every application, which would affect app performance.
  • We currently trade a high-level interaction API, Marionette, for low-level interactions using Orangutan behind the scenes. While this provides us with transparent events for the platform, it also makes writing rich tests difficult. There are plans to improve this in the future by adding Marionette integration.

Why log parsing

You may be a person that believes log parsing is evil, and to a certain extent I would agree with you. While I do wish for every solution to be solvable using a performance API, unfortunately this doesn’t exist yet. This is yet another reason why projects like Firefox OS are important for pushing the Web forward: we find use cases which are not yet fully implemented for the Web, poke holes to discover what’s missing, and ultimately improve APIs for everyone by pushing to fill these gaps with standards. Log parsing is Raptor’s stop-gap until the Web catches up.

Raptor workflow

Raptor is a Node.js module built into the Gaia project that enables performance testing against a device or emulator. Once you have the project dependencies installed, running performance tests from the Gaia directory is straightforward:

  1. Install the Raptor profile on the device; this configures various settings to assist with performance testing. Note: this is a different profile that will reset Gaia, so keep that in mind if you have particular settings stored.
    make raptor
  2. Choose a test to run. Currently, tests are stored in tests/raptor in the Gaia tree, so some manual discovery is needed. There are plans to improve the command-line API soon.
  3. Run the test. For example, you can performance test the cold launch of the Clock app using the following command, specifying the number of runs to launch it:
    APP=clock RUNS=5 node tests/raptor/launch_test
  4. Observe the console output. At the end of the test, you will be given a table of test results with some statistics about the performance runs completed. Example:
[Cold Launch: Clock Results] Results for clock.gaiamobile.org

Metric                            Mean     Median   Min      Max      StdDev  p95
--------------------------------  -------  -------  -------  -------  ------  -------
coldlaunch.navigationLoaded       214.100  212.000  176.000  269.000  19.693  247.000
coldlaunch.navigationInteractive  245.433  242.000  216.000  310.000  19.944  274.000
coldlaunch.visuallyLoaded         798.433  810.500  674.000  967.000  71.869  922.000
coldlaunch.contentInteractive     798.733  810.500  675.000  967.000  71.730  922.000
coldlaunch.fullyLoaded            802.133  813.500  682.000  969.000  72.036  928.000
coldlaunch.rss                    10.850   10.800   10.600   11.300   0.180   11.200
coldlaunch.uss                    0.000    0.000    0.000    0.000    0.000   n/a
coldlaunch.pss                    6.190    6.200    5.900    6.400    0.114   6.300

Visualizing Performance

Access to raw performance data is helpful for a quick look at how long something takes, or to determine whether a change you made causes a number to increase, but it’s not very helpful for monitoring changes over time. Raptor has two methods for visualizing performance data over time.

Official metrics

At raptor.mozilla.org, we have dashboards for persisting the values of performance metrics over time. In our automation infrastructure, we execute performance tests against devices for every new build generated by mozilla-central or b2g-inbound (Note: The source of builds could change in the future.) Right now this is limited to Flame devices running at 319MB of memory, but there are plans to expand to different memory configurations and additional device types in the very near future. When automation receives a new build, we run our battery of performance tests against the devices, capturing numbers such as application launch time and memory at fullyLoaded, reboot duration, and power current. These numbers are stored and visualized many times per day, varying based on the commits for the day.

Looking at these graphs, you can drill down into specific apps, focus or expand your time query, and do advanced query manipulation to gain insight into performance. Watching trends over time, you can even pick out regressions that have sneaked into Firefox OS.

Local visualization

The very same visualization tool and backend used by raptor.mozilla.org is also available as a Docker image. After running the local Raptor tests, data will report to your own visualization dashboard based on those local metrics. There are some additional prerequisites for local visualization, so be sure to read the Raptor docs on MDN to get started.

Performance regressions

Building pretty graphs that display metrics is all well and fine, but finding trends in data or signal within noise can be difficult. Graphs help us understand data and make it accessible for others to easily communicate around the topic, but using graphs for finding regressions in performance is reactive; we should be proactive about keeping things fast.

Regression hunting on CI

Rob Wood has been doing incredible work in our pre-commit continuous integration efforts surrounding the detection of performance regressions in prospective commits. With every pull request to the Gaia repository, our automation runs the Raptor performance tests against the target branch with and without the patch applied. After a certain number of iterations for statistical accuracy, we have the ability to reject patches from landing in Gaia if a regression is too severe. For scalability purposes we use emulators to run these tests, so there are inherent drawbacks such as greater variability in the metrics reported. This variability limits the precision with which we can detect regressions.

Regression hunting in automation

Luckily we have post-commit automation in place to run performance tests against real devices; this is where the dashboards receive their data from. Based on the excellent Python tool from Will Lachance, we query our historical data daily, attempting to discover any smaller regressions that could have crept into Firefox OS in the previous seven days. Any performance anomalies found are promptly reported to Bugzilla, and relevant bug component watchers are notified.

Recap and next steps

Raptor, combined with User Timing, has given us the know-how to ask questions about the performance of Gaia and receive accurate answers. In the future, we plan on improving the API of the tool and adding higher-level interactions. Raptor should also be able to work more seamlessly with third-party applications, something that is not easily done right now.

Raptor has been an exciting tool to build, while at the same time helping us drive the Web forward in the realm of performance. We plan on using it to keep Firefox OS fast, and to stay proactive about protecting Gaia performance.

View full post on Mozilla Hacks – the Web developer blog

Testing Your Native Android App

It’s an interesting time to be a web developer!

For years apps have been eating the web, and now we are seeing the Web eat the OS. Mozilla is pushing for a world where you can write standards-based Open Web Apps. These apps should install as native apps and just work, regardless of the platform.

Packaged App on Firefox OS

With the new Firefox for Android, we will now automatically convert your Open Web App into a native Android app.

Packaged App on Android

But how can you test your app on Android before you submit it to the Marketplace?

We have a new tool to add to your lineup!

Introducing mozilla-apk-cli

If you have NodeJS as well as zip/unzip installed, then you can use our new command line tool to build testable .apk installers.

mozilla-apk-cli is a command-line application, so the commands below would be run from a terminal application.

Getting set up is easy:

npm install mozilla-apk-cli

This will give you a new program mozilla-apk-cli available in ./node_modules/.bin/

You can also install it globally with

npm install -g mozilla-apk-cli

I’ll assume you put ./node_modules/.bin in your path or installed globally.

Open Web Apps come in three flavors and the CLI can build them all:

  • hosted app
  • packaged app zip file
  • packaged app from a directory of source code

Let’s assume you have a packaged app you are working on.
You have a directory layout like this:

www/
   index.html
   manifest.webapp
   css/
       style.css
   i/
     icon-256.png
     icon-128.png
     icon-60.png
   js/
      main.js
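
The manifest.webapp file is what makes this directory an installable Open Web App. A minimal manifest might look like this (all values are illustrative):

{
  "name": "My Test App",
  "description": "A minimal packaged Open Web App",
  "launch_path": "/index.html",
  "icons": {
    "60": "/i/icon-60.png",
    "128": "/i/icon-128.png",
    "256": "/i/icon-256.png"
  },
  "developer": {
    "name": "Your Name",
    "url": "http://example.com"
  }
}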

You can build a testable Android app with the following:

mozilla-apk-cli ./www my_test_app.apk

This will produce my_test_app.apk which can be side-loaded to your phone in either of the following ways:

  • Put it on the web and download/install from a browser on your Android device
  • Use adb to install the app
adb install my_test_app.apk

Setting up adb is out of scope for this blog post, but the Android SDK site has some good resources on adb.

adb using USB

Distribution

Do not distribute this test .apk file!!!

If you change your app, rebuild, and send the APK to your users, the update will fail to install. With each new version of your app, you have to remove the test app using Android’s app manager before installing the second version.

mozilla-apk-cli is only for testing and debugging your new Android app locally. When you are happy with your app, you should distribute the Open Web App through the Marketplace or from your website via an install page.

You don’t need to manage an .apk file directly. Just like you don’t need to manage Mac OS X, Windows or Linux builds for your Open Web App.

We’ve baked native Android support deeply into the Firefox for Android runtime. When a user chooses to install an Open Web App, the native app is synthesized on demand using the production APK Factory Service. This works regardless of the website or Marketplace you are installing the Open Web App from.

How does the CLI work?

Back to our new test tool. This tool is a frontend client to the APK Factory Service. It takes your code and sends it to a reviewer instance of the service (reviewer as opposed to the production release environment).

This service synthesizes a native Android app, builds it with the Android SDK and lastly signs it with a development cert. As mentioned, this synthesized APK is debuggable and should not be distributed.

The nice thing about the service is that you do not have to have Java, ant, the Android SDK and other Android development tools installed on your local computer. You can focus on the web and test on whatever devices you have handy.

Hosted and Packaged Zip files

We just looked at how to test a packaged app that is a directory of source code. Now let’s look at the other two modes. If you have already built your packaged app into a .zip file, use:

mozilla-apk-cli ./my_app.zip my_test_app.apk

If you are building a hosted app, use:

mozilla-apk-cli http://localhost:8080/ my_test_app.apk

No Android Devices? No stress

Perhaps you are saying "Sounds cool, but I don’t have any Android devices… How am I supposed to test?"

Good point. We’re enabling this automatically.

On the one hand, don’t worry. One thing mobile-first development has taught us is that there are way more devices and platforms than you will ever have testing resources for. The web is about open standards and progressive enhancement.

Your web app is just going to be a little bit nicer as a native Android app, fitting into the OS as the user expects.

It doesn’t use native UI widgets or anything like that, so extensive testing is not required. The rendering engine is Gecko, from the already installed Firefox for Android.

On the other hand… open standards and compatibility make a nice story, but as web developers we know things tend to have platform-specific bugs. I’d recommend the traditional grading of supported platforms, and if Android is a high priority, definitely get a device, install Firefox for Android, and test your app.

As we make native experiences automatic across platforms (Android, Mac OS X, etc.), we are all ears for feedback. What do you think?

mozilla-apk-cli Resources

The source code for the CLI tool is on GitHub. If you need a sample packaged app, here is a demo version. The source code for the APK Factory Service is also on GitHub.

Join us with ideas, feedback and questions, and please file bugs in Marketplace Integration.

Thanks to Wil Clouser for the illustrations.

View full post on Mozilla Hacks – the Web developer blog
