v2.0 alpha 2 release
Pre-release
Special Notes
- This is the 2.0 alpha 2 release; it has received only initial testing.
- 29 new features have been added and 47 bugs have been fixed as of this release.
New Features
- VSM-63 Filter/Identify OSDs that are not up and in
- VSM-61 OpenStack Prod release alignment with VSM
- VSM-55 Paginate OSD pages
- VSM-10 VSM1.0 - [CDEK-1190] VSM "Create cluster" cannot select storage nodes or monitors
- VSM-304 Support presenting a pool to OpenStack Juno all-in-one
- VSM-282 current installer scripts only support IP-address-based installation; host-name-based installation is also required
- VSM-256 current logic enforces 3 replicas; sometimes users may need to loosen or tighten this enforcement.
- VSM-257 reorganize functions inside the installer so that the controller and agent can be installed separately
- VSM-243 the "create cluster" button doesn't take effect after clicking on Ubuntu 14
- VSM-235 make scripts support both rpm and deb packages
- VSM-222 It should be possible to import a VSM-created cluster back into VSM itself.
- VSM-221 upgrade VSM's dependent OpenStack packages to Juno
- VSM-212 request to support the Ceph Hammer release
- VSM-209 support multiple subnets
- VSM-189 “Add Server” progress
- VSM-163 server_manifest does not check for local host in /etc/hosts
- VSM-154 Request to have a new column on the storage-group-status page showing the number of OSDs in each storage group.
- VSM-143 On OSD Status page, limit to 100 OSDs per page
- VSM-144 On OSD Status page, sort OSDs by status, server, % capacity used
- VSM-130 OpenStack Juno Support
- VSM-94 Implement Pass-through Parameters
- VSM-85 Storage Group Status improvements
- VSM-89 Full OS disk indication
- VSM-84 Improved server status reporting
- VSM-83 Restart stopped monitor
- VSM-69 Override default PG per OSD value
- VSM-73 Zone monitor warning
- VSM-46 overlay vsm with existing ceph cluster
- VSM-23 Create User UI should prompt for more information
Resolved Bugs
- VSM-305 presenting a pool should keep backward compatibility, e.g. working with Juno, Icehouse, and Havana
- VSM-300 if no cluster has been created, opening "server management" > "manage servers" leaves the UI messy
- VSM-298 on dashboard, when adding new monitors, the monitor block doesn't update accordingly
- VSM-297 on dashboard, the storage group balls have no data
- VSM-293 wrong version of VSM displayed in the dashboard
- VSM-295 the installation fails when downloading dependent packages if an HTTP proxy is required
- VSM-292 the tooltip background is hard to read
- VSM-286 presenting pools to OpenStack again after pools have already been presented leaves the attach status stuck at "starting"
- VSM-285 when presenting multiple pools to OpenStack Cinder, cinder.conf is incomplete
- VSM-287 messy login window when accessing https:///dashboard
- VSM-284 the cluster status monitor should provide a complete summary instead of the partial information from the dashboard
- VSM-283 after restarting the Ceph cluster, the OSD tree becomes messy
- VSM-280 "Internal Server Error" when click on "Data Device Status"
- VSM-278 after I add a server by clicking 'add server',the osd tree become messy
- VSM-279 click 'restart osd' or 'restore osd',the osd tree will become messy
- VSM-281 on dashboard, the cluster status like "warning" shouldn't be clickable
- VSM-276 Allow agent to be deployed on nodes where no ceph role assigned
- VSM-277 click on "add storage group" will tigger "something went wrong" error on UI
- VSM-272 on OS with newer rabbitmq version like 3.4, message queue can't be accessed
- VSM-273 "Import Error name Login" received when try to login to web console
- VSM-275 the overview page is messy with Firefox
- VSM-270 reinstall the keystone when deploy the vsm
- VSM-266 Dashboard UI correction
- VSM-265 clicking Manage VSM > "Add/Remove User" > "Change Password" shows no page
- VSM-264 on "PG Status" page, the text layout is messy.
- VSM-267 The logo is not fully shown.
- VSM-268 Can't create EC pool
- VSM-262 when clicking on "device management", the page shows "something went wrong"
- VSM-258 wrong dependencies URL
- VSM-259 script 'vsm-agent' missing LSB tags and overrides
- VSM-253 A lot of buttons are invalid, such as “addmonitor”, “removemonitor”, “addserver”, “removeserver”, “restore osd”, “remove osd”, and so on
- VSM-254 the success notification boxes, such as "Success: Successfully created storage pool: ", can't be dismissed
- VSM-255 when creating a Ceph cluster with one zone defined, the cluster stays in health_warn status
- VSM-252 the auto-refresh function is broken on most of the interface pages
- VSM-251 after clicking clusterManagement -> create cluster -> create cluster, no MDS is created.
- VSM-250 Error when packing
- VSM-245 error creating cluster
- VSM-231 VSM | Not possible to choose the machines during cluster creation
- VSM-229 Error state displayed in green color
- VSM-225 VSM Creates Very Small pg_num and pgp_num Size for EC Pool
- VSM-218 on CentOS 7, the web layout is messy
- VSM-195 if no cluster is initialized, the UI shouldn't break at ‘manage servers’.
- VSM-194 User experience: OSD Status and Manage Devices.
- VSM-180 When a cluster is destroyed and recreated, VSM controller increments the cluster id (aka database rowid)
- VSM-153 after clicking vsm -> settings -> update, there is no success notification
- VSM-68 Verify there are enough discrete servers and/or zones to support a VSM-requested replication level
- VSM-5 KB used in "Pool status" panel not consistent with "ceph df" in backend
Known Issues
- VSM-313 the logo disappears when running on IE 11