Product Feedback

  1. Deep linking to cells

    Could we get deep linking to the cell level? That way, we could reference a particular cell when explaining results, not the whole notebook.

    6 votes
    0 comments  ·  Notebooks
  2. 1 vote
    0 comments  ·  Navigation UI
  3. Support for Spark 2.0.0 snapshot in Notebook

    Provide support for Spark 2.0.0 in notebooks.

    1 vote
    0 comments  ·  Notebooks
  4. Provide tooltips during scroll for markdown headings

    Problem:
    Notebooks are large and hard to navigate. Currently, we scroll through them looking for familiar code or headings.

    Solution:
    As we scroll through, it would be great to see the Markdown headings pop out on the right --- a kind of overview or summary of the notebook. This would be similar to some code editors that show the entire structure of the code as you scroll. Showing just the structure of the Markdown might be enough; this would incentivize folks to use the…

    1 vote
    0 comments  ·  Notebooks
  5. I can set up my Databricks account so that it sends out a notification (email or Slack message) when a cluster is created

    We use Slack as our main communication method in our company. We don't want to spin up a cluster for testing and then forget to shut it down. With a cluster-up notification, more team members would be aware of the running cluster and could take care of shutting it down when there is no activity for some period of time.
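    A minimal sketch of what such a notification could look like on the Slack side, assuming a Slack incoming-webhook endpoint (the URL and helper names below are hypothetical, not part of any Databricks API):

```python
import json
import urllib.request

# Hypothetical incoming-webhook endpoint; replace with your own.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def build_cluster_notice(cluster_name, creator):
    """Build the Slack message payload for a cluster-creation event."""
    return {
        "text": (
            f"Cluster '{cluster_name}' was created by {creator}. "
            "Remember to shut it down when it goes idle."
        )
    }

def notify_cluster_created(cluster_name, creator):
    """POST the notice to the Slack webhook."""
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(build_cluster_notice(cluster_name, creator)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```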

    6 votes
    1 comment  ·  Cluster management
  6. Administrators could have a script that runs over notebooks as they are created to prevent people from storing AWS keys in them.

    We don't want people putting AWS keys into their notebooks; mount points are much nicer. We can search the notebooks for patterns like "secret key", but something that could be scripted and would either always be in force or could be run daily would be nice.
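    A minimal sketch of such a scan, assuming notebook sources are available as plain text (the patterns below are illustrative only; real secret scanners use many more):

```python
import re

# Illustrative patterns only; tune and extend for your environment.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "aws_secret_access_key": re.compile(
        r"(?i)secret[_ ]?key\W{0,5}[A-Za-z0-9/+=]{40}"
    ),
}

def scan_notebook_source(source):
    """Return (pattern_name, match) pairs for suspected AWS keys in a notebook's source."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(source):
            hits.append((name, match))
    return hits
```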

    1 vote
    1 comment  ·  Notebooks
  7. Selectable series color in charts

    Especially important with dashboards...

    4 votes
    0 comments  ·  Notebooks
  8. I can link a particular cell in a notebook to share.

    When I share my notebook, I would like my reader to start with the meat of the notebook, or its most relevant portions. A link to that particular cell would help with that.

    Also, when writing emails, it's nice to be very specific by referring to the exact cell I am talking about rather than sharing an open-ended notebook.

    1 vote
    0 comments  ·  Navigation UI
  9. Notebook export to JSON

    Ability to export notebook source (and possibly results) to JSON. Notebooks could then be easily parsed.
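    Assuming a hypothetical export layout in which the notebook is a JSON object holding a list of command cells, parsing could then be as simple as:

```python
import json

def list_commands(notebook_json):
    """Extract the source string of each cell from an exported notebook.
    Assumes a hypothetical {"commands": [{"command": "..."}]} layout."""
    notebook = json.loads(notebook_json)
    return [cell.get("command", "") for cell in notebook.get("commands", [])]
```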

    4 votes
    0 comments  ·  Notebooks
  10. Completed runs view should show more than 20 runs

    Scheduled hourly jobs are a common use case where 20 is a poor number (a minimum good default would be 25, and a better default 50, so that more than one full day can be seen).

    In addition, the user should be able to choose how many runs are shown per page.

    4 votes
    0 comments  ·  Cluster management
  11. Fix the notebook filepath mouseover

    When the notebook name was moved to the left side of the notebook, the filepath mouseover kept showing up centered on the name.

    Then, when you click into the sidebar, the name of your notebook gets covered up.

    See attached screenshots.

    0 votes
    1 comment  ·  Navigation UI
  12. Account setup: Default select time zone based on country/city, choose from map

    When setting up a new account, the user is asked to enter a country/city etc. and then has to select a timezone.

    Clicking on the timezone box shows all available timezones and the user needs to choose one. Since there are many choices, it is hard to find the desired one.

    In most cases, the timezone should match the user's country/city, so maybe default-populate this field based on the user's country/city, but also provide a world-map pop-up with timezones/cities (similar to Linux setup)?

    1 vote
    0 comments  ·  Account management
  13. SparkNet is made available

    SparkNet runs Caffe neural net models in parallel. Vote for this to help machine learning run on Databricks!

    4 votes
    0 comments  ·  External libraries / applications
  14. Libraries could be loaded from a private Maven repo

    I'd like to use the "Maven Coordinates" feature for adding libraries to our Databricks cloud. The challenge is that this is for a private Maven repo. I see that the "Advanced" options when adding a library have a field for repository. That's a good start, but our repo is protected with basic http auth.

    Can you add support for specifying credentials for the repo?

    26 votes
    2 comments  ·  External libraries / applications
  15. Markdown cells from notebooks embedded with %run are rendered when a notebook is run as a job

    Currently, the Markdown of an embedded notebook only renders when the %run cell is run individually. If "Run All" is used or the notebook is run as a job, the Markdown from embedded notebooks is not displayed.

    This would be useful when building notebooks that act as dashboards. Often we want to generate the same visualization for various inputs, which involves a parameterized notebook embedded in a notebook that defines the parameters.

    7 votes
    1 comment  ·  Notebooks
  16. Cluster names could be long-lived

    For example, names could persist beyond stops and starts (such as overnight shutdowns or weekends). It would be nice to always have a cluster name you could depend on. This might tie back to the other comments on ACLs for a cluster.

    6 votes
    3 comments  ·  Cluster management
  17. Run Notebook During Cluster Creation

    To configure an elastic IP address, it would be great to run a notebook that assigns the IP address to the Spark Driver during cluster creation without having to manually run the notebook each time.

    4 votes
    1 comment
  18. Allow users to control Zoomdata instance

    Zoomdata does not need to be run 24/7 and we would like the ability to start and stop the instance running Zoomdata on demand.

    1 vote
    0 comments
  19. Enable GitHub integration at the basic subscription level (please)

    I keep coming back to one major thing about your product. We have a basic subscription, and with that we do get a lot, but one thing that would make our life easier is enabling GitHub integration for basic subscriptions. Right now I can't justify 10x the cost per month to upgrade to a professional subscription just for GitHub integration. I can understand the price hike for enabling more and bigger clusters; totally get that. But not for something simple like GitHub. Additionally: export is not sufficient because 1: the DB export is proprietary and the human-readable export is…

    7 votes
    0 comments  ·  Other
  20. Incompatibility: Python console lacks sys.stdout.fileno, letting wget crash

    To reproduce the error, include the spacy library, then run:

    from spacy.en.download import main
    main()

    /home/ubuntu/databricks/python/local/lib/python2.7/site-packages/wget.pyc in get_console_width()
        142     winsize = array("H", [0] * 4)
        143     try:
    --> 144         ioctl(sys.stdout.fileno(), TIOCGWINSZ, winsize)
        145     except IOError:
        146         pass

    AttributeError: 'ConsoleBuffer' object has no attribute 'fileno'

    You can work around the problem with the following snippet:

    sys.stdout.fileno = lambda: 0

    This problem doesn't appear in the IPython notebook, hence I assume the Databricks notebook is lacking this attribute.
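    A slightly more defensive version of the same workaround, which only patches streams that genuinely lack fileno() and leaves real file objects untouched (the helper name is ours, not part of any API):

```python
import sys

def ensure_stdout_fileno(stream=None):
    """Give a stdout replacement (e.g. the notebook's ConsoleBuffer) a fileno()
    so ioctl-based callers like wget don't crash; real files are left untouched."""
    stream = sys.stdout if stream is None else stream
    if not hasattr(stream, "fileno"):
        stream.fileno = lambda: 0  # same stand-in value as the original snippet
    return stream
```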

    7 votes
    0 comments  ·  Notebooks